On Mon, 19 Feb 2007 12:20:42 +0530
Balbir Singh [EMAIL PROTECTED] wrote:
+int memctlr_mm_overlimit(struct mm_struct *mm, void *sc_cont)
+{
+ struct container *cont;
+ struct memctlr *mem;
+ long usage, limit;
+ int ret = 1;
+
+ if (!sc_cont)
+ goto out;
+
On Tue, 10 Jul 2007 16:39:43 -0500
Serge E. Hallyn [EMAIL PROTECTED] wrote:
In the list of stakeholders, I try to guess based on past comments and
contributions what *general* area they are most likely to contribute in.
I may try to narrow those down later, but am just trying to get something
On Sat, 28 Jul 2007 01:39:37 +0530
Balbir Singh [EMAIL PROTECTED] wrote:
At OLS, the resource management BOF, it was discussed that we need to manage
RSS and unmapped page cache together. This patchset is a step towards that
Can I ask a question? Why limit RSS instead of # of used pages
Thank you for the documentation. How about adding the following topics?
- Benefit and purpose: when this feature helps a user.
- What is accounted as RSS.
- What is accounted as page-cache.
- What is not accounted now.
- When a page is accounted (charged).
- about mem_control_type
- When a user can
On Thu, 30 Aug 2007 04:07:11 +0530
Balbir Singh [EMAIL PROTECTED] wrote:
1. Several people recommended it
2. Herbert mentioned that they've moved to that interface and it
was working fine for them.
I have no strong opinion. But how about megabytes? (too big?)
There will be no rounding
Hi,
On Fri, 7 Sep 2007 12:39:42 +0900 (JST)
[EMAIL PROTECTED] (YAMAMOTO Takashi) wrote:
+enum mem_container_stat_index {
+ /*
+ * for MEM_CONTAINER_TYPE_ALL, usage == pagecache + rss
+ */
+ MEMCONT_STAT_PAGECACHE,
+ MEMCONT_STAT_RSS,
+
+ /*
+ * redundant;
)
Signed-off-by: KAMEZAWA Hiroyuki [EMAIL PROTECTED]
---
mm/vmscan.c |9 ++---
1 file changed, 2 insertions(+), 7 deletions(-)
Index: linux-2.6.23-rc4-mm1.bak/mm/vmscan.c
===
--- linux-2.6.23-rc4-mm1.bak.orig/mm/vmscan.c
+++ linux
On Thu, 13 Sep 2007 13:11:35 +0400
Pavel Emelyanov [EMAIL PROTECTED] wrote:
First of all - why do we need this kind of control. The major
pros is that kernel memory control protects the system
from DoS attacks by processes that live in container. As our
experience shows many exploits simply
On Mon, 10 Sep 2007 22:40:49 +0530
Srivatsa Vaddagiri [EMAIL PROTECTED] wrote:
+ tg->cfs_rq = kzalloc(sizeof(cfs_rq) * num_possible_cpus(), GFP_KERNEL);
+ if (!tg->cfs_rq)
+ goto err;
+ tg->se = kzalloc(sizeof(se) * num_possible_cpus(), GFP_KERNEL);
+ if (!tg->se)
+
PROTECTED] kamezawa]# cat /opt/cgroup/memory.limit_in_bytes
unlimited
Signed-off-by: KAMEZAWA Hiroyuki [EMAIL PROTECTED]
include/linux/res_counter.h |1 +
kernel/res_counter.c| 11 ---
2 files changed, 9 insertions(+), 3 deletions(-)
Index: linux-2.6.23-rc8-mm1/include/linux
On Tue, 25 Sep 2007 16:19:18 +0530
Balbir Singh [EMAIL PROTECTED] wrote:
Hi, Kamezawa-San,
Hi,
Your changes make sense, but not CLUI (Command Line Usage) sense.
1. The problem is that when we mix strings with numbers, tools that
parse/use get confused and complicated
yes, maybe.
2.
On Tue, 25 Sep 2007 19:14:53 +0400
Pavel Emelyanov [EMAIL PROTECTED] wrote:
KAMEZAWA Hiroyuki wrote:
On Tue, 25 Sep 2007 17:34:00 +0400
Pavel Emelyanov [EMAIL PROTECTED] wrote:
Well, no container may have the ULLMAX (or what is it?) bytes
touched/allocated :) So I don't see any need
-by: KAMEZAWA Hiroyuki [EMAIL PROTECTED]
---
include/linux/memcontrol.h | 13 -
mm/memcontrol.c| 113 +
mm/vmscan.c| 16 +-
3 files changed, 140 insertions(+), 2 deletions(-)
Index: linux-2.6.23-rc8-mm1/mm/memcontrol.c
On Wed, 26 Sep 2007 08:46:20 -0700
Badari Pulavarty [EMAIL PROTECTED] wrote:
On Wed, 2007-09-26 at 18:14 +0900, KAMEZAWA Hiroyuki wrote:
This is an experimental patch for drop pages in empty cgroup.
comments ?
Hmm.. Patch doesn't seems to help :(
elm3b155:/dev/cgroup/xxx # cat
Hi, thank you for the review.
On Mon, 01 Oct 2007 09:46:02 +0530
Balbir Singh [EMAIL PROTECTED] wrote:
@@ -424,17 +424,80 @@ void mem_cgroup_uncharge(struct page_cgr
if (atomic_dec_and_test(&pc->ref_cnt)) {
page = pc->page;
lock_page_cgroup(page);
- mem =
The current implementation of the memory cgroup controller does the following during migration:
1. uncharge when unmapped.
2. charge again when remapped.
Consider migrating a page from OLD to NEW.
In the following case, memory (for page_cgroup) will leak:
1. charge OLD page as page-cache. (charge = 1
2. A process
mem_cgroup_isolate_pages() ignores
!PageLRU pages.
Tested and worked well in ia64/NUMA box.
Signed-off-by: KAMEZAWA Hiroyuki [EMAIL PROTECTED]
---
include/linux/memcontrol.h | 22 +++
mm/memcontrol.c| 62 ++---
mm/migrate.c | 13
by mem_cgroup_prepare_migration().
- move mem_cgroup_prepare_migration() after goto:
Signed-off-by: KAMEZAWA Hiroyuki [EMAIL PROTECTED]
Index: linux-2.6.23-rc8-mm2/mm/memcontrol.c
===
--- linux-2.6.23-rc8-mm2.orig/mm/memcontrol.c
+++ linux
Hi, Balbir-san
This is the patch set against memory cgroup that I have now,
reflecting the comments I got.
=
[1] charge refcnt fix patch - avoid charging against a page which is being
uncharged.
[2] fix-err-handling patch - remove unnecessary unlock_page_cgroup()
.
This patch adds a test at charge time to verify that page_cgroup's refcnt is
greater than 0. If not, unlock and retry.
Signed-off-by: KAMEZAWA Hiroyuki [EMAIL PROTECTED]
mm/memcontrol.c |9 +++--
1 file changed, 7 insertions(+), 2 deletions(-)
Index: linux-2.6.23-rc8-mm2/mm/memcontrol.c
This unlock_page_cgroup() is unnecessary.
Signed-off-by: KAMEZAWA Hiroyuki [EMAIL PROTECTED]
mm/memcontrol.c |2 --
1 file changed, 2 deletions(-)
Index: linux-2.6.23-rc8-mm2/mm/memcontrol.c
===
--- linux-2.6.23-rc8-mm2.orig
/migration.
Because __isolate_lru_page() doesn't move !PageLRU pages, it is
safe to avoid touching the page and its page_cgroup.
Signed-off-by: KAMEZAWA Hiroyuki [EMAIL PROTECTED]
mm/memcontrol.c | 13 ++---
1 file changed, 10 insertions(+), 3 deletions(-)
Index: devel-2.6.23
:
- reflected comments.
- divided the patch into a !PageLRU patch and a migration patch.
Signed-off-by: KAMEZAWA Hiroyuki [EMAIL PROTECTED]
include/linux/memcontrol.h | 19 +++
mm/memcontrol.c| 43 +++
mm/migrate.c
cgroup successfully.
Tested and worked well on x86_64/fake-NUMA system.
Changelog:
- added a new interface force_reclaim.
- changes spin_lock to spin_lock_irqsave().
Signed-off-by: KAMEZAWA Hiroyuki [EMAIL PROTECTED]
mm/memcontrol.c | 79
This patch adds the following functions.
- clear_page_cgroup(page, pc)
- page_cgroup_assign_new_page_group(page, pc)
Mainly for cleanup.
The manner of checking page->cgroup again after lock_page_cgroup() is
implemented in a straightforward way.
Signed-off-by: KAMEZAWA Hiroyuki [EMAIL PROTECTED]
mm
On Tue, 09 Oct 2007 16:39:48 +0530
Balbir Singh [EMAIL PROTECTED] wrote:
+static inline int
+page_cgroup_assign_new_page_cgroup(struct page *page, struct page_cgroup *pc)
+{
+ int ret = 0;
+
+ lock_page_cgroup(page);
+ if (!page_get_page_cgroup(page))
+
On Wed, 10 Oct 2007 07:31:38 +0900 (JST)
[EMAIL PROTECTED] (YAMAMOTO Takashi) wrote:
- atomic_inc(&pc->ref_cnt);
- goto done;
+ if (unlikely(!atomic_inc_not_zero(&pc->ref_cnt))) {
+ /* is this page being uncharged? */
+
On Tue, 9 Oct 2007 20:26:42 +0900
KAMEZAWA Hiroyuki [EMAIL PROTECTED] wrote:
+ */
+ if (clear_page_cgroup(page, pc) == pc) {
OK.. so we've come so far and seen that pc has changed underneath us,
what do we do with this pc?
Hmm... How about
On Wed, 10 Oct 2007 10:01:17 +0900 (JST)
[EMAIL PROTECTED] (YAMAMOTO Takashi) wrote:
hi,
i implemented some statistics for your memory controller.
here's a new version.
changes from the previous:
- make counters per-cpu.
- value *= PAGE_SIZE
YAMAMOTO-san, I like this
This set is a fix for memory cgroup against 2.6.23-rc8-mm2.
Not including any new feature.
If this is merged to the next -mm, I'm happy.
Patches:
[1/5] ... fix refcnt handling in charge mem_cgroup_charge()
[2/5] ... fix error handling path in mem_cgroup_charge()
[3/5] ... check page->cgroup under
.
This patch adds a test at charge time to verify that page_cgroup's refcnt is
greater than 0. If not, unlock and retry.
Changelog v1-v2:
* added cpu_relax() before retry.
Signed-off-by: KAMEZAWA Hiroyuki [EMAIL PROTECTED]
mm/memcontrol.c | 10 --
1 file changed, 8 insertions(+), 2 deletions
This unlock_page_cgroup() is unnecessary.
Signed-off-by: KAMEZAWA Hiroyuki [EMAIL PROTECTED]
mm/memcontrol.c |2 --
1 file changed, 2 deletions(-)
Index: devel-2.6.23-rc8-mm2/mm/memcontrol.c
===
--- devel-2.6.23-rc8-mm2.orig
:
- reflected comments.
- divided the patch into a !PageLRU patch and a migration patch.
Signed-off-by: KAMEZAWA Hiroyuki [EMAIL PROTECTED]
include/linux/memcontrol.h | 19 +++
mm/memcontrol.c| 43 +++
mm/migrate.c | 14
for reclaiming/migration.
Because __isolate_lru_page() doesn't move !PageLRU pages, it is
safe to avoid touching a !PageLRU() page and its page_cgroup.
Signed-off-by: KAMEZAWA Hiroyuki [EMAIL PROTECTED]
mm/memcontrol.c | 13 ++---
1 file changed, 10 insertions(+), 3 deletions(-)
Index
in mem_cgroup_uncharge() error path, but this is planned to be
removed by another patch
Note:
- a comment in mem_cgroup_uncharge() will be removed by force-empty patch
Signed-off-by: KAMEZAWA Hiroyuki [EMAIL PROTECTED]
mm/memcontrol.c | 100
This patch set includes following functions for memory cgroup.
Based on 2.6.23-rc8-mm2 + My bugfix patch set.
- memory.force_empty ... uncharge all pages in cgroup.
- memory.stat... status accounting in cgroup.
I merged YAMAMOTO-san's patch set for statistics into this set.
[1/5] ...
Add PCGF_PAGECACHE flag to page_cgroup to remember this page is
charged as page-cache.
This is very useful for implementing precise accounting in memory cgroup.
Signed-off-by: KAMEZAWA Hiroyuki [EMAIL PROTECTED]
Signed-off-by: YAMAMOTO Takashi [EMAIL PROTECTED]
mm/memcontrol.c | 18
-off-by: KAMEZAWA Hiroyuki [EMAIL PROTECTED]
mm/memcontrol.c | 102
1 file changed, 95 insertions(+), 7 deletions(-)
Index: devel-2.6.23-rc8-mm2/mm/memcontrol.c
===
--- devel
Remember whether the page_cgroup is on the active_list in page_cgroup->flags.
Signed-off-by: KAMEZAWA Hiroyuki [EMAIL PROTECTED]
Signed-off-by: YAMAMOTO Takashi [EMAIL PROTECTED]
mm/memcontrol.c | 12
1 file changed, 8 insertions(+), 4 deletions(-)
Index: devel-2.6.23-rc8-mm2/mm
Show accounted information of memory cgroup by memory.stat file
Signed-off-by: YAMAMOTO Takashi [EMAIL PROTECTED]
Signed-off-by: KAMEZAWA Hiroyuki [EMAIL PROTECTED]
mm/memcontrol.c | 54 ++
1 file changed, 54 insertions(+)
Index: devel
by kzalloc().
Problem:
- charge/uncharge count can overflow. But are they unnecessary?
Signed-off-by: KAMEZAWA Hiroyuki [EMAIL PROTECTED]
Signed-off-by: YAMAMOTO Takashi [EMAIL PROTECTED]
mm/memcontrol.c | 120 ++--
1 file changed, 116
On Thu, 11 Oct 2007 17:35:40 +0530
Balbir Singh [EMAIL PROTECTED] wrote:
KAMEZAWA Hiroyuki wrote:
+
+static inline void
+mem_cgroup_page_migration(struct page *page, struct page *newpage);
Typo, the semicolon needs to go :-)
Oh, thanks!, will send updated version later.
-Kame
-NUMA box.
Changelog v2 - v3
- fixed typo in !CONFIG_CGROUP_MEM_CONT case.
Changelog v1 - v2:
- reflected comments.
- divided the patch into a !PageLRU patch and a migration patch.
Signed-off-by: KAMEZAWA Hiroyuki [EMAIL PROTECTED]
include/linux/memcontrol.h | 19 +++
mm
On Mon, 15 Oct 2007 15:37:01 +0900 (JST)
[EMAIL PROTECTED] (YAMAMOTO Takashi) wrote:
- changed from u64 to s64
why?
+/*
+ * For batching mem_cgroup_charge_statistics() (see below).
+ */
+static inline void mem_cgroup_stat_add(struct mem_cgroup_stat *stat,
+enum
On Tue, 16 Oct 2007 07:38:23 +0900 (JST)
[EMAIL PROTECTED] (YAMAMOTO Takashi) wrote:
+/*
+ * For batching mem_cgroup_charge_statistics() (see below).
+ */
+static inline void mem_cgroup_stat_add(struct mem_cgroup_stat *stat,
+enum mem_cgroup_stat_index idx,
On Tue, 16 Oct 2007 09:15:49 +0900 (JST)
[EMAIL PROTECTED] (YAMAMOTO Takashi) wrote:
Index: devel-2.6.23-rc8-mm2/mm/memcontrol.c
===
--- devel-2.6.23-rc8-mm2.orig/mm/memcontrol.c
+++ devel-2.6.23-rc8-mm2/mm/memcontrol.c
@@
This patch set adds
- force_empty interface, which drops all charges in memory cgroup.
This enables rmdir() against unused memory cgroup.
- the memory cgroup statistics accounting.
Based on 2.6.23-mm1 + http://lkml.org/lkml/2007/10/12/53
Changes from previous version is
- merged comments.
.
- changes spin_lock to spin_lock_irqsave().
Signed-off-by: KAMEZAWA Hiroyuki [EMAIL PROTECTED]
mm/memcontrol.c | 102
1 file changed, 95 insertions(+), 7 deletions(-)
Index: devel-2.6.23-mm1/mm/memcontrol.c
Add PCGF_PAGECACHE flag to page_cgroup to remember this page is
charged as page-cache.
This is very useful for implementing precise accounting in memory cgroup.
Signed-off-by: KAMEZAWA Hiroyuki [EMAIL PROTECTED]
Signed-off-by: YAMAMOTO Takashi [EMAIL PROTECTED]
mm/memcontrol.c | 18
Remember whether the page_cgroup is on the active_list in page_cgroup->flags.
Against 2.6.23-mm1.
Signed-off-by: KAMEZAWA Hiroyuki [EMAIL PROTECTED]
Signed-off-by: YAMAMOTO Takashi [EMAIL PROTECTED]
mm/memcontrol.c | 12
1 file changed, 8 insertions(+), 4 deletions(-)
Index: devel-2.6.23
(account and show info)
- changed from u64 to s64
- added mem_cgroup_stat_add() and batched statistics modification logic.
- removed stat init code because mem_cgroup is allocated by kzalloc().
Signed-off-by: KAMEZAWA Hiroyuki [EMAIL PROTECTED]
Signed-off-by: YAMAMOTO Takashi [EMAIL PROTECTED]
mm
Show accounted information of memory cgroup by memory.stat file
Changelog v1-v2
- dropped Charge/Uncharge entry.
Signed-off-by: YAMAMOTO Takashi [EMAIL PROTECTED]
Signed-off-by: KAMEZAWA Hiroyuki [EMAIL PROTECTED]
mm/memcontrol.c | 52
1
On Tue, 16 Oct 2007 21:17:24 -0700 (PDT)
David Rientjes [EMAIL PROTECTED] wrote:
On Tue, 16 Oct 2007, KAMEZAWA Hiroyuki wrote:
Remember whether the page_cgroup is on the active_list in page_cgroup->flags.
Against 2.6.23-mm1.
Signed-off-by: KAMEZAWA Hiroyuki [EMAIL PROTECTED]
Signed-off
to spin_lock_irqsave().
Signed-off-by: KAMEZAWA Hiroyuki [EMAIL PROTECTED]
mm/memcontrol.c | 102
1 file changed, 95 insertions(+), 7 deletions(-)
Index: devel-2.6.23-mm1/mm/memcontrol.c
On Tue, 16 Oct 2007 11:28:43 -0700
Andrew Morton [EMAIL PROTECTED] wrote:
I would prefer these patches to go in once the fixes that you've posted
earlier have gone in (the migration fix series). I am yet to test the
migration fix per-se, but the series seemed quite fine to me. Andrew
could
On Wed, 17 Oct 2007 10:35:58 +0530
Balbir Singh [EMAIL PROTECTED] wrote:
If the only use of this is for rmdir, why not just make it part of the
rmdir operation on the memory cgroup if there are no tasks by default?
That's a good idea, but sometimes an administrator might want to force
On Wed, 17 Oct 2007 19:56:36 -0700 (PDT)
[EMAIL PROTECTED] (Paul Menage) wrote:
+ seq_printf(m, "%s\t%d\t%d\n",
+ ss->name, ss->root->subsys_bits,
+ ss->root->number_of_cgroups);
}
Because subsys_bits is unsigned long, use %lu or %lx.
On Sat, 20 Oct 2007 07:33:38 +0900
KAMEZAWA Hiroyuki [EMAIL PROTECTED] wrote:
One is for debug; I'd like to check the swappiness - RSS.failure:CACHE.failure
relationship. It's OK to turn these params into a DEBUG option.
Ah... but it's maybe better to check the # of page faults in the cgroup directly.
I'll
.
Works well on my fake NUMA system.
I think we can add numastat based on this.
Signed-off-by: KAMEZAWA Hiroyuki [EMAIL PROTECTED]
mm/memcontrol.c | 97
1 file changed, 71 insertions(+), 26 deletions(-)
Index: devel-2.6.23-mm1/mm/memcontrol.c
- fixed typo
- changes buf[2]=0 to static const
Changelog v2 - v3:
- changed the name from force_reclaim to force_empty.
Changelog v1 - v2:
- added a new interface force_reclaim.
- changes spin_lock to spin_lock_irqsave().
Signed-off-by: KAMEZAWA Hiroyuki [EMAIL PROTECTED]
mm
Show accounted information of memory cgroup by memory.stat file
Changelog v1-v2
- dropped Charge/Uncharge entry.
Signed-off-by: YAMAMOTO Takashi [EMAIL PROTECTED]
Signed-off-by: KAMEZAWA Hiroyuki [EMAIL PROTECTED]
mm/memcontrol.c | 52
1
-by: KAMEZAWA Hiroyuki [EMAIL PROTECTED]
include/linux/cgroup.h |1 +
kernel/cgroup.c|7 +++
2 files changed, 8 insertions(+)
Index: devel-2.6.23-mm1/include/linux/cgroup.h
===
--- devel-2.6.23-mm1.orig/include/linux
Remember whether the page_cgroup is on the active_list in page_cgroup->flags.
Against 2.6.23-mm1.
Changelog v1-v2
- moved #define to out-side of struct definition
Signed-off-by: KAMEZAWA Hiroyuki [EMAIL PROTECTED]
Signed-off-by: YAMAMOTO Takashi [EMAIL PROTECTED]
mm/memcontrol.c | 12
Add PCGF_PAGECACHE flag to page_cgroup to remember this page is
charged as page-cache.
This is very useful for implementing precise accounting in memory cgroup.
Changelog v1 - v2
- moved #define to out-side of struct definition
Signed-off-by: KAMEZAWA Hiroyuki [EMAIL PROTECTED]
Signed-off
Because NODE_DATA(node)->node_zonelists[] is guaranteed to contain
all necessary zones, it is not necessary to use for_each_online_node().
And this for_each_online_node() makes the reclaim routine always start
from node 0. This is bad.
Signed-off-by: KAMEZAWA Hiroyuki [EMAIL PROTECTED]
mm/vmscan.c
These patches are for memory cgroup on my queue.
Just dumping before week-end.
I'd like to post these against the next -mm.
This is RFC again.
Comments on the previous version are reflected AMAP, thanks.
Some patches have no change. Several patches are new.
[1/10] fix
On Fri, 19 Oct 2007 09:54:23 -0700
Paul Menage [EMAIL PROTECTED] wrote:
On 10/19/07, KAMEZAWA Hiroyuki [EMAIL PROTECTED] wrote:
cgroup's resource has a failure counter. But I think memory cgroup
has 2 types of failure:
- failure of cache
- failure of RSS
Why do you think
This patch adds a pre_destroy handler for mem_cgroup and tries to make
the mem_cgroup empty at rmdir().
Signed-off-by: KAMEZAWA Hiroyuki [EMAIL PROTECTED]
mm/memcontrol.c |8
1 file changed, 8 insertions(+)
Index: devel-2.6.23-mm1/mm/memcontrol.c
and show info)
- changed from u64 to s64
- added mem_cgroup_stat_add() and batched statistics modification logic.
- removed stat init code because mem_cgroup is allocated by kzalloc().
Signed-off-by: KAMEZAWA Hiroyuki [EMAIL PROTECTED]
Signed-off-by: YAMAMOTO Takashi [EMAIL PROTECTED]
mm
PAGE_SIZE.
Signed-off-by: KAMEZAWA Hiroyuki [EMAIL PROTECTED]
mm/memcontrol.c | 13 +
1 file changed, 13 insertions(+)
Index: devel-2.6.23-mm1/mm/memcontrol.c
===
--- devel-2.6.23-mm1.orig/mm/memcontrol.c
+++ devel-2.6.23
On Tue, 23 Oct 2007 09:30:53 +0530
Balbir Singh [EMAIL PROTECTED] wrote:
KAMEZAWA Hiroyuki wrote:
Because NODE_DATA(node)->node_zonelists[] is guaranteed to contain
all necessary zones, it is not necessary to use for_each_online_node.
And this for_each_online_node() makes reclaim routine
On Wed, 24 Oct 2007 19:26:34 +0530
Balbir Singh [EMAIL PROTECTED] wrote:
Could we define
enum {
MEM_CGROUP_CHARGE_TYPE_CACHE = 0,
MEM_CGROUP_CHARGE_TYPE_MAPPED = 1,
};
and use the enums here and below.
Okay, I'll use this approach.
Thanks,
-Kame
On Wed, 24 Oct 2007 20:29:08 +0530
Balbir Singh [EMAIL PROTECTED] wrote:
+ for_each_possible_cpu(cpu) {
+ int nid = cpu_to_node(cpu);
+ struct mem_cgroup_stat_cpu *mcsc;
+ if (sizeof(*mcsc) < PAGE_SIZE)
+ mcsc =
Hi, this is an updated set of enhancements for memory cgroup in my box.
I'd like to post some of these when the next -mm is shipped
(but rebasing and more testing should be done).
Any comments are welcome.
Changes from previous sets:
- added comments/explanation.
- dropped failcnt patch
- modified
-off-by: KAMEZAWA Hiroyuki [EMAIL PROTECTED]
mm/vmscan.c |8 +++-
1 file changed, 3 insertions(+), 5 deletions(-)
Index: devel-2.6.23-mm1/mm/vmscan.c
===
--- devel-2.6.23-mm1.orig/mm/vmscan.c
+++ devel-2.6.23-mm1/mm/vmscan.c
:
- adjusted to 2.6.23-mm1
- fixed typo
- changes buf[2]=0 to static const
Changelog v2 - v3:
- changed the name from force_reclaim to force_empty.
Changelog v1 - v2:
- added a new interface force_reclaim.
- changes spin_lock to spin_lock_irqsave().
Signed-off-by: KAMEZAWA Hiroyuki
and added VM_BUG_ON
Changes from original:
- divided into 2 patch (account and show info)
- changed from u64 to s64
- added mem_cgroup_stat_add() and batched statistics modification logic.
- removed stat init code because mem_cgroup is allocated by kzalloc().
Signed-off-by: KAMEZAWA Hiroyuki
This patch adds a pre_destroy handler for mem_cgroup and tries to make
the mem_cgroup empty at rmdir().
Signed-off-by: KAMEZAWA Hiroyuki [EMAIL PROTECTED]
mm/memcontrol.c |8
1 file changed, 8 insertions(+)
Index: devel-2.6.23-mm1/mm/memcontrol.c
information is shown in the memory.stat file.
This patch changes early_init from 1 to 0 to use kmalloc/vmalloc at boot.
Changelog v1 - v2:
- changed from per-node to per-zone.
- just count active/inactive
Signed-off-by: KAMEZAWA Hiroyuki [EMAIL PROTECTED]
mm/memcontrol.c | 171
pre_destroy(),
the kernel keeps the rule that destroy() against a subsystem is called only
when refcnt == 0, and allows the css's ref to be used by objects
other than tasks.
Signed-off-by: KAMEZAWA Hiroyuki [EMAIL PROTECTED]
include/linux/cgroup.h |1 +
kernel/cgroup.c|7 +++
2 files changed, 8
On Tue, 30 Oct 2007 17:35:01 +0530
Balbir Singh [EMAIL PROTECTED] wrote:
KAMEZAWA Hiroyuki wrote:
Because NODE_DATA(node)->node_zonelists[] is guaranteed to contain
all necessary zones, it is not necessary to use for_each_online_node.
And this for_each_online_node() makes reclaim routine
On Tue, 30 Oct 2007 21:22:52 +0900 (JST)
[EMAIL PROTECTED] (YAMAMOTO Takashi) wrote:
@@ -93,6 +95,11 @@ enum {
MEM_CGROUP_TYPE_MAX,
};
+enum charge_type {
+ MEM_CGROUP_CHARGE_TYPE_CACHE = 0,
+ MEM_CGROUP_CHARGE_TYPE_MAPPED = 0,
+};
+
should be different values. :-)
On Tue, 30 Oct 2007 21:32:59 +0900 (JST)
[EMAIL PROTECTED] (YAMAMOTO Takashi) wrote:
+
+/*
+ * Per-zone statistics.
+ * Please be careful. The array can be very big on environments which have
+ * a very big MAX_NUMNODES. Adding a new stat member to this will eat much
+ * memory.
+ * Only
On Tue, 30 Oct 2007 23:58:00 +0530
Balbir Singh [EMAIL PROTECTED] wrote:
Dave Hansen wrote:
On Tue, 2007-10-30 at 20:14 +0900, KAMEZAWA Hiroyuki wrote:
- for_each_online_node(node) {
- zones =
NODE_DATA(node)->node_zonelists[target_zone].zones
usual (default) zonelist order.
Signed-off-by: KAMEZAWA Hiroyuki [EMAIL PROTECTED]
mm/vmscan.c | 10 --
1 file changed, 4 insertions(+), 6 deletions(-)
Index: devel-2.6.23-mm1/mm/vmscan.c
===
--- devel-2.6.23-mm1.orig/mm
Hi, this set contains the memory cgroup enhancements I have now.
Tested on x86_64 and passed some tests.
All are against 2.6.23-mm1 + previous memory cgroup bugfix patches.
Any comments are welcome.
Patch contents:
[1/8] fix zone handling in try_to_free_mem_cgroup_page
This
Remember whether the page_cgroup is on the active_list in page_cgroup->flags.
Against 2.6.23-mm1.
Changelog v2-v3
- renamed #define PCGF_ACTIVE to PAGE_CGROUP_FLAG_ACTIVE.
Changelog v1-v2
- moved #define to out-side of struct definition
Signed-off-by: KAMEZAWA Hiroyuki [EMAIL PROTECTED]
Signed-off
- fixed typo
- changes buf[2]=0 to static const
Changelog v2 - v3:
- changed the name from force_reclaim to force_empty.
Changelog v1 - v2:
- added a new interface force_reclaim.
- changes spin_lock to spin_lock_irqsave().
Signed-off-by: KAMEZAWA Hiroyuki [EMAIL PROTECTED]
mm
This patch adds a pre_destroy handler for mem_cgroup and tries to make
the mem_cgroup empty at rmdir().
Signed-off-by: KAMEZAWA Hiroyuki [EMAIL PROTECTED]
mm/memcontrol.c |8
1 file changed, 8 insertions(+)
Index: devel-2.6.23-mm1/mm/memcontrol.c
(),
the kernel keeps the rule that destroy() against a subsystem is called only
when refcnt == 0, and allows css refs to be used by objects
other than tasks.
Signed-off-by: KAMEZAWA Hiroyuki [EMAIL PROTECTED]
include/linux/cgroup.h |1 +
kernel/cgroup.c|7 +++
2 files changed, 8
On Fri, 9 Nov 2007 07:14:22 + (GMT)
Hugh Dickins [EMAIL PROTECTED] wrote:
If we're charging rss and we're charging cache, it seems obvious that
we should be charging swapcache - as has been done. But in practice
that doesn't work out so well: both swapin readahead and swapoff leave
the
On Fri, 9 Nov 2007 07:13:22 + (GMT)
Hugh Dickins [EMAIL PROTECTED] wrote:
mem_cgroup_charge_common shows a tendency to OOM without good reason,
when a memhog goes well beyond its rss limit but with plenty of swap
available. Seen on x86 but not on PowerPC; seen when the next patch
omits
On Mon, 12 Nov 2007 04:57:03 + (GMT)
Hugh Dickins [EMAIL PROTECTED] wrote:
Could I confirm a change in the logic ?
* Before this patch, a wrong swapcache charge is added to whoever
called try_to_free_page().
try_to_free_pages? No, I don't think any wrong charge was made
there.
Define a function to remember reclaim priority (as zone->prev_priority)
Signed-off-by: KAMEZAWA Hiroyuki [EMAIL PROTECTED]
include/linux/memcontrol.h | 23 +++
mm/memcontrol.c| 20
2 files changed, 43 insertions(+)
Index: linux-2.6.24-rc2
On Wed, 14 Nov 2007 17:41:31 +0900
KAMEZAWA Hiroyuki [EMAIL PROTECTED] wrote:
This patch adds nid/zoneid value to page cgroup.
This helps per-zone accounting for memory cgroup and reclaim routine.
Signed-off-by: KAMEZAWA Hiroyuki [EMAIL PROTECTED]
Sigh...
[EMAIL PROTECTED]
Sorry,
-Kame
-zone lru for memory cgroup.
I think this patch's implementation style can be adjusted if the zone LRU
implementation changes.
Signed-off-by: KAMEZAWA Hiroyuki [EMAIL PROTECTED]
mm/memcontrol.c | 117 +++-
1 file changed, 100 insertions
as
zone->prev_priority.
This value is used to calculate reclaim_mapped.
Signed-off-by: KAMEZAWA Hiroyuki [EMAIL PROTECTED]
mm/vmscan.c | 155
1 file changed, 105 insertions(+), 50
Just a cleanup for a later patch, to avoid dirty nesting.
Signed-off-by: KAMEZAWA Hiroyuki [EMAIL PROTECTED]
mm/vmscan.c | 184 +++-
1 file changed, 97 insertions(+), 87 deletions(-)
Index: linux-2.6.24-rc2-mm1/mm/vmscan.c
pages is TOTAL - INACTIVE.
This patch turns the memory controller's early_init to 0 for calling
kmalloc().
Signed-off-by: KAMEZAWA Hiroyuki [EMAIL PROTECTED]
mm/memcontrol.c | 131 +++-
1 file changed, 130 insertions(+), 1 deletion(-)
Index
add macro scan_global_lru().
This is used to detect whether a scan_control scans the global LRU or a mem_cgroup LRU.
Signed-off-by: KAMEZAWA Hiroyuki [EMAIL PROTECTED]
mm/vmscan.c | 17 -
1 file changed, 12 insertions(+), 5 deletions(-)
Index: linux-2.6.24-rc2-mm1/mm/vmscan.c