hi,
On Wed, 10 Sep 2008 08:32:15 -0700
Balbir Singh [EMAIL PROTECTED] wrote:
YAMAMOTO Takashi wrote:
hi,
hi,
here's a patch to implement memory.min_usage,
which controls the minimum memory usage for a cgroup.
it works similarly to mlock;
global memory reclamation.
it's against 2.6.24-rc3-mm2 + memory.swappiness patch i posted here yesterday.
but it's logically independent from the swappiness patch.
todo:
- restrict non-root user's operation regardless of owner of cgroupfs files?
- make oom killer aware of this?
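for illustration, here is a rough user-space model of the intended semantics (all names here are made up for the sketch, not taken from the patch): global reclaim would skip a cgroup whose usage is at or below its configured floor, similarly to how mlock exempts pages.

```c
#include <stdbool.h>

/* toy model of memory.min_usage: global reclaim skips a cgroup whose
 * usage is at or below its configured floor.  illustrative names only. */
struct toy_cgroup {
	unsigned long long usage;	/* current charge, in bytes */
	unsigned long long min_usage;	/* reclaim floor, in bytes */
};

/* may global reclaim take pages from this group at all? */
static bool toy_can_reclaim(const struct toy_cgroup *cg)
{
	return cg->usage > cg->min_usage;
}

/* how many bytes reclaim may take without going below the floor */
static unsigned long long toy_reclaimable(const struct toy_cgroup *cg)
{
	return toy_can_reclaim(cg) ? cg->usage - cg->min_usage : 0;
}
```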
YAMAMOTO Takashi
here's a new version
), instead.
I guess taste differs,...
yes, it seems different. :)
YAMAMOTO Takashi
___
Containers mailing list
[EMAIL PROTECTED]
https://lists.linux-foundation.org/mailman/listinfo/containers
___
Devel mailing list
hi,
On Fri, 11 Jul 2008 17:34:46 +0900 (JST)
[EMAIL PROTECTED] (YAMAMOTO Takashi) wrote:
hi,
my patch penalizes heavy-writer cgroups as task_dirty_limit does
for heavy-writer tasks. i don't think that it's necessary to be
tied to the memory subsystem because i merely want
hi,
On Wed, 6 Aug 2008 17:20:46 +0900 (JST)
[EMAIL PROTECTED] (YAMAMOTO Takashi) wrote:
hi,
On Fri, 11 Jul 2008 17:34:46 +0900 (JST)
[EMAIL PROTECTED] (YAMAMOTO Takashi) wrote:
hi,
my patch penalizes heavy-writer cgroups as task_dirty_limit does
for heavy
the number (or percentage or whatever) of
dirty pages in a memory cgroup, it can't be done independently from
the memory subsystem, of course. it's another story, tho.
YAMAMOTO Takashi
If chasing page_cgroup and memcg makes this patch much more complex,
I think this style of implementation is a choice
for memory reclaim under memcg.
to implement what you need, i think that we need to keep track of
the numbers of dirty-pages in each memory cgroups as a first step.
do you agree?
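as a sketch of that first step, per-cgroup dirty accounting could look like this user-space model (hypothetical names; the kernel side would hook the page-dirtying and writeback-completion paths):

```c
#include <stdbool.h>

/* toy per-cgroup dirty-page accounting: bump on page dirtying, drop
 * when the page is cleaned or written back.  illustrative only. */
struct toy_dirty_stat {
	long nr_dirty;	/* pages currently dirty in the cgroup */
	long limit;	/* dirty budget for the cgroup */
};

static void toy_page_dirtied(struct toy_dirty_stat *s) { s->nr_dirty++; }
static void toy_page_cleaned(struct toy_dirty_stat *s) { s->nr_dirty--; }

/* the balancing decision, analogous to what task_dirty_limit() does
 * per task: throttle writers in a cgroup that is over its budget */
static bool toy_should_throttle(const struct toy_dirty_stat *s)
{
	return s->nr_dirty >= s->limit;
}
```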
YAMAMOTO Takashi
hi,
On Wed, 9 Jul 2008 15:00:34 +0900 (JST)
[EMAIL PROTECTED] (YAMAMOTO Takashi) wrote:
hi,
the following patch is a simple implementation of
dirty balancing for cgroups. any comments?
it depends on the following fix:
http://lkml.org/lkml/2008/7/8/428
A few
hi,
the following patch is a simple implementation of
dirty balancing for cgroups. any comments?
it depends on the following fix:
http://lkml.org/lkml/2008/7/8/428
YAMAMOTO Takashi
Signed-off-by: YAMAMOTO Takashi [EMAIL PROTECTED]
---
diff --git a/include/linux/cgroup_subsys.h b
) {
+	p = swap_info + type;
+
+	if ((p->flags & SWP_ACTIVE) == SWP_ACTIVE) {
+		unsigned int i = 0;
+
+		spin_unlock(&swap_lock);
what prevents the device from being swapoff'ed while you drop swap_lock?
YAMAMOTO Takashi
Hi, Yamamoto-san.
Thank you for your comment.
On Fri, 4 Jul 2008 15:54:31 +0900 (JST), [EMAIL PROTECTED] (YAMAMOTO
Takashi) wrote:
hi,
+/*
+ * uncharge all the entries that are charged to the group.
+ */
+void __swap_cgroup_force_empty(struct mem_cgroup *mem
)
This move happens before cgroup is replaced by another_cgroup.
currently cgroup_attach_task calls ->attach callbacks after
assigning tsk->cgroups. are you talking about something else?
YAMAMOTO Takashi
On Tue, 10 Jun 2008 14:50:32 +0900 (JST)
[EMAIL PROTECTED] (YAMAMOTO Takashi) wrote:
3. Use Lazy Manner
When the task moves, we can mark the pages used by it as
Wrong Charge, Should be dropped, and add them some penalty in the
LRU.
Pros
in CURR?
i think that you can redirect new charges in TASK to DEST
so that usage_of_task(TASK) will not grow.
YAMAMOTO Takashi
it
sooner than later.
#3 will not cause OOM-killer, I hope... A user can notice memory shortage.
we are talking about the case where a cgroup's working set is getting
hopelessly larger than its limit. i don't see why #3 will not
cause OOM-kill. can you explain?
YAMAMOTO Takashi
?) and exec() will flush all the usage.
i guess that moving long-running applications can be desirable
esp. for not so well-designed systems.
YAMAMOTO Takashi
need this in addition to the limit?
ie. aren't their values always equal except the root cgroup?
YAMAMOTO Takashi
a lot of todo,
it seems good enough as a starting point to me.
so i'd like to withdraw mine.
nishimura-san, is it ok for you?
YAMAMOTO Takashi
)
+		ret = RES_BELOW_LOW;
+	else if (counter->usage <= counter->hwmark)
+		ret = RES_BELOW_HIGH;
+	}
+	spin_unlock_irqrestore(&counter->lock, flags);
+	return ret;
+}
can't it be RES_OVER_LIMIT?
eg. when you lower the limit.
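the point can be illustrated with a small model of the check (names follow the patch only loosely; the in-between state and function name are made up for the sketch):

```c
/* model of a watermark check that also reports the case raised
 * above: after the limit is lowered, usage can exceed it, so the
 * check needs an over-limit result too. */
enum toy_res_state {
	TOY_RES_BELOW_LOW,	/* usage <= low watermark */
	TOY_RES_BELOW_HIGH,	/* usage <= high watermark */
	TOY_RES_BELOW_LIMIT,	/* usage <= limit */
	TOY_RES_OVER_LIMIT,	/* possible when the limit was lowered */
};

static enum toy_res_state toy_check(unsigned long long usage,
				    unsigned long long lwmark,
				    unsigned long long hwmark,
				    unsigned long long limit)
{
	if (usage > limit)
		return TOY_RES_OVER_LIMIT;
	if (usage <= lwmark)
		return TOY_RES_BELOW_LOW;
	if (usage <= hwmark)
		return TOY_RES_BELOW_HIGH;
	return TOY_RES_BELOW_LIMIT;
}
```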
YAMAMOTO Takashi
YAMAMOTO Takashi
);
+
+out:
+ return ret;
+}
+#endif
shouldn't it check the global usage (nr_swap_pages) as well?
YAMAMOTO Takashi
BTW, I'm just trying to make my swapcontroller patch
that is rebased on recent kernel and implemented
as part of memory controller.
I'm going to submit it by the middle of May.
what's the status of this?
YAMAMOTO Takashi
from
the memory controller as far as possible.
(i don't want to complicate the memory controller.)
YAMAMOTO Takashi
On Tue, May 13, 2008 at 8:21 PM, YAMAMOTO Takashi
[EMAIL PROTECTED] wrote:
Could you please mention what the limitations are? We could get those
fixed or
take another serious look at the mm-owner patches.
for example, its callback can't sleep.
You need to be able
assigned to a cgroup will win.
Doesn't that risk triggering the BUG_ON(mm->swap_cgroup != oldscg) in
swap_cgroup_attach() ?
which version of the patch you are looking at?
the following is the latest copy.
YAMAMOTO Takashi
Signed-off-by: YAMAMOTO Takashi [EMAIL PROTECTED]
---
--- linux
properly.
YAMAMOTO Takashi
we attach an mm?
instead of a task, you mean?
because we count the number of ptes which points to swap
and ptes belong to an mm, not a task.
YAMAMOTO Takashi
with cgroups, tho.
YAMAMOTO Takashi
i implemented shmem swap accounting. see below.
YAMAMOTO Takashi
the following is another swap controller, which was designed and
implemented independently from nishimura-san's one.
some random differences from nishimura-san's one:
- counts and limits the number
the last child again and again.
i think you want to reclaim from all cgroups under the curr_cgroup
including eg. children's children.
YAMAMOTO Takashi
the following is a new version of the patch.
changes from the previous:
- fix a BUG_ON in swap_cgroup_attach and add a comment about it.
YAMAMOTO Takashi
Signed-off-by: YAMAMOTO Takashi [EMAIL PROTECTED]
---
--- linux-2.6.25-rc3-mm1/init/Kconfig.BACKUP	2008-03-05 15:45:50.0
YAMAMOTO Takashi wrote:
hi,
i tried to reproduce the large swap cache issue, but no luck.
can you provide a little more detailed instruction?
This issue also happens on generic 2.6.25-rc3-mm1
(with limitting only memory), so I think this issue is not
related to your patch.
I'm
YAMAMOTO Takashi
Signed-off-by: YAMAMOTO Takashi [EMAIL PROTECTED]
---
--- linux-2.6.25-rc3-mm1/init/Kconfig.BACKUP	2008-03-05 15:45:50.000000000 +0900
+++ linux-2.6.25-rc3-mm1/init/Kconfig	2008-03-12 11:52:48.000000000 +0900
@@ -379,6 +379,12 @@ config CGROUP_MEM_RES_CTLR
Only enable
[ resending with To: akpm. Andrew, can you include this in -mm tree? ]
hi,
the following patch is to fix spurious EBUSY on cgroup removal.
YAMAMOTO Takashi
call mm_free_cgroup earlier.
otherwise a reference due to lazy mm switching can prevent cgroup removal.
Signed-off-by: YAMAMOTO Takashi
YAMAMOTO Takashi
- anonymous objects (shmem) are not accounted.
- precise wrt moving tasks between cgroups.
this patch contains some unrelated small fixes which i've posted separately:
- exe_file fput botch fix
- cgroup_rmdir EBUSY fix
any comments?
YAMAMOTO Takashi
--- linux-2.6.25-rc3-mm1/init
-lock);
+ }
+ local_irq_restore(flags);
return ret;
}
what prevents the topology (in particular, -parent pointers) from
changing behind us?
YAMAMOTO Takashi
);
+ }
+ local_irq_restore(flags);
return ret;
}
what prevents the topology (in particular, -parent pointers) from
changing behind us?
YAMAMOTO Takashi
to answer myself: cgroupfs rename doesn't allow topological changes
in the first place.
btw, i think you need to do the same
YAMAMOTO Takashi
-prev, struct page_cgroup, lru);
+	/* If there is still garbage, exit and retry */
+	if (pc->flags & PAGE_CGROUP_FLAG_GARBAGE)
+ break;
i think mem_cgroup_isolate_pages needs a similar check.
YAMAMOTO Takashi
;
- VM_BUG_ON(!pc);
+ VM_BUG_ON(!page);
can't page be NULL here if mem_cgroup_uncharge clears pc->page behind us?
ie. bug.
YAMAMOTO Takashi
hi,
the following patch is to fix spurious EBUSY on cgroup removal.
YAMAMOTO Takashi
call mm_free_cgroup earlier.
otherwise a reference due to lazy mm switching can prevent cgroup removal.
Signed-off-by: YAMAMOTO Takashi [EMAIL PROTECTED]
---
--- linux-2.6.24-rc8-mm1/kernel/fork.c.BACKUP
to
cgroups directly, rather than via tasks. so it isn't straightforward to
use the information for other classification mechanisms like yours which
might not share the view of hierarchy with the memory subsystem.
YAMAMOTO Takashi
Thanks,
Hirokazu Takahashi.
YAMAMOTO Takashi
By the way, I think once a memory controller of cgroup is introduced, it will
help to track down which cgroup is the original source.
do you mean to make this a part of the memory subsystem?
YAMAMOTO Takashi
/$$/cgroup
memory:/
memory:/foo
imawoto%
YAMAMOTO Takashi
Signed-off-by: YAMAMOTO Takashi [EMAIL PROTECTED]
---
--- linux-2.6.24-rc8-mm1/kernel/cgroup.c.BACKUP	2008-01-23 14:43:29.000000000 +0900
+++ linux-2.6.24-rc8-mm1/kernel/cgroup.c	2008-01-24 13:56:28.000000000
But even with seqlock, we'll have to disable irq.
for writers, sure.
readers don't need to disable irq.
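for reference, a user-space toy of the seqcount pattern shows why only the writer side needs exclusion (and, in the kernel, irq disabling) while readers just retry; all names here are invented for the sketch and a single writer is assumed:

```c
#include <stdatomic.h>

/* toy seqcount: the writer bumps the counter to odd before updating
 * and to even after; a reader retries if it saw an odd count or the
 * count changed while it was reading. */
struct toy_seq {
	atomic_uint seq;
	unsigned long long value;
};

static void toy_seq_write(struct toy_seq *s, unsigned long long v)
{
	atomic_fetch_add(&s->seq, 1);	/* odd: update in progress */
	s->value = v;
	atomic_fetch_add(&s->seq, 1);	/* even: stable again */
}

static unsigned long long toy_seq_read(struct toy_seq *s)
{
	unsigned int start;
	unsigned long long v;

	do {
		start = atomic_load(&s->seq);
		v = s->value;
	} while ((start & 1) || atomic_load(&s->seq) != start);
	return v;
}
```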
YAMAMOTO Takashi
YAMAMOTO Takashi
2.6.24-rc3-mm2 + memory.swappiness patch i posted here yesterday.
but it's logically independent from the swappiness patch.
todo:
- restrict non-root user's operation regardless of owner of cgroupfs files?
- make oom killer aware of this?
YAMAMOTO Takashi
Signed-off-by: YAMAMOTO Takashi [EMAIL
activate_locked;
+#endif /* CONFIG_CGROUP_MEM_CONT */
+
Maybe
==
if (scan_global_lru(sc) && !mem_cgroup_canreclaim(page, sc->mem_cgroup))
	goto activate_locked;
==
i don't think the decision belongs to callers.
(at least it wasn't my intention.)
YAMAMOTO Takashi
here's a trivial patch to implement memory.swappiness,
which controls swappiness for cgroup memory reclamation.
it's against 2.6.24-rc3-mm2.
YAMAMOTO Takashi
Signed-off-by: YAMAMOTO Takashi [EMAIL PROTECTED]
---
--- linux-2.6.24-rc3-mm2-swappiness/include/linux/memcontrol.h.BACKUP
2007-12
from making these values back to
the default.
YAMAMOTO Takashi
@@ -17,6 +17,9 @@
{
spin_lock_init(&counter->lock);
counter->limit = (unsigned long long)LLONG_MAX;
+ counter->low_watermark = (unsigned long long)LLONG_MAX;
+ counter->high_watermark = (unsigned long long
);
} else /* being uncharged ? ...do relax */
break;
'active' seems unused.
YAMAMOTO Takashi
+static inline struct mem_cgroup_per_zone *
+mem_cgroup_zoneinfo(struct mem_cgroup *mem, int nid, int zid)
+{
+ if (!mem->info.nodeinfo[nid])
can this be true?
YAMAMOTO Takashi
+ return NULL;
+ return mem->info.nodeinfo[nid]->zoneinfo[zid
are going to throw away the res_counter abstraction.
YAMAMOTO Takashi
Balbir Singh wrote:
YAMAMOTO Takashi wrote:
+int batch_count = 128; /* XXX arbitrary */
Could we define and use something like MEM_CGROUP_BATCH_COUNT for now?
Later we could consider and see if it needs to be tunable. numbers are
hard to read in code.
although i don't think
GFP_ATOMIC.)
YAMAMOTO Takashi
Signed-off-by: YAMAMOTO Takashi [EMAIL PROTECTED]
---
--- linux-2.6.24-rc2-mm1-kame-pd/include/linux/res_counter.h.BACKUP	2007-11-14 16:05:48.000000000 +0900
+++ linux-2.6.24-rc2-mm1-kame-pd/include/linux/res_counter.h	2007-11-22 15:14:32.000000000 +0900
@@ -32,6
before waking
these threads.
I'll start some tests on these patches.
thanks.
YAMAMOTO Takashi
currently it doesn't matter much because low_watermark is not used at all
as far as high_watermark is LLONG_MAX.
Don't we use it by checking res_counter_below_low_watermark()?
yes, but only when we get above highwatermark.
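in other words, the low watermark only becomes meaningful as a reclaim target once the high watermark has been crossed; a rough model (illustrative names, not from the patch):

```c
/* model of the watermark pair as discussed: nothing happens until
 * usage exceeds the high watermark, and then reclaim pushes usage
 * back down to the low watermark. */
struct toy_wm {
	unsigned long long usage, lwmark, hwmark;
};

/* bytes the background reclaimer should push out right now */
static unsigned long long toy_reclaim_target(const struct toy_wm *c)
{
	if (c->usage <= c->hwmark)
		return 0;		/* low watermark not consulted */
	return c->usage - c->lwmark;	/* reclaim down to the low mark */
}
```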
YAMAMOTO Takashi
On Thu, 22 Nov 2007 17:34:20 +0900 (JST)
[EMAIL PROTECTED] (YAMAMOTO Takashi) wrote:
+ /* usage is recorded in bytes */
+ total = mem->res.usage >> PAGE_SHIFT;
+ rss = mem_cgroup_read_stat(&mem->stat, MEM_CGROUP_STAT_RSS);
+ return (rss * 100) / total
push out data
up to the low watermark from the cgroup.
i implemented something like that. (and rebased to 2.6.24-rc2-mm1.)
what's the best way to expose watermarks to userland is an open question.
i took the simplest way for now. do you have any suggestions?
YAMAMOTO Takashi
here's
);
+ }
}
struct page_cgroup *page_get_page_cgroup(struct page *page)
are they worth to be cached?
can't you use page_zonenum(pc->page)?
YAMAMOTO Takashi
mem_cgroup *mem)
+{
+ int zone;
+ mem->lrus[0] = mem->local_lru;
'zone' seems unused.
YAMAMOTO Takashi
up to the low watermark from the cgroup.
i implemented something like that. (and rebased to 2.6.24-rc2-mm1.)
what's the best way to expose watermarks to userland is an open question.
i took the simplest way for now. do you have any suggestions?
YAMAMOTO Takashi
--- ./include/linux
@@ -93,6 +95,11 @@ enum {
MEM_CGROUP_TYPE_MAX,
};
+enum charge_type {
+ MEM_CGROUP_CHARGE_TYPE_CACHE = 0,
+ MEM_CGROUP_CHARGE_TYPE_MAPPED = 0,
+};
+
should be different values. :-)
YAMAMOTO Takashi
= page_cgroup_to_zonestat_index(pc);
+ preempt_disable();
+ __mem_cgroup_zonestat_add(zstat, MEM_CGROUP_ZONESTAT_ACTIVE,
+ direction, index);
+ __mem_cgroup_zonestat_add(zstat, MEM_CGROUP_ZONESTAT_INACTIVE,
+ direction, index);
dec?
YAMAMOTO
,
+ direction, index);
dec?
direction(add value) is 1 or -1 here. Hmm, this is maybe confusing.
ok, I'll clean up this.
adding the same value to both of active and inactive seems wrong.
i think you want to subtract 'direction' from inactive here.
YAMAMOTO Takashi
hi,
i implemented background reclamation for your memory controller and
did a few benchmarks with and without it. any comments?
YAMAMOTO Takashi
-
time make -j4 bzImage in a cgroup with 64MB limit:
without patch:
real	22m22.389s
user
page_cgroup *pc, bool active,
+ struct mem_cgroup *mem)
{
can mem be different from pc->mem_cgroup here?
YAMAMOTO Takashi
;
+}
i think the function name should be something which implies batching.
Hm, How about this ?
==
mem_cgroup_stat_add_atomic()
==
and add this
==
VM_BUG_ON(preempt_count() == 0)
==
_atomic sounds like a different thing to me. _nonpreemptible?
YAMAMOTO Takashi
;
/*
* get page-cgroup and clear it under lock.
+ * force-empty can drop page-cgroup without checking refcnt.
force_empty
+ char buf[2] = "0";
it should be static const unless you want a runtime assignment.
YAMAMOTO Takashi
On Wed, 10 Oct 2007 10:01:17 +0900 (JST)
[EMAIL PROTECTED] (YAMAMOTO Takashi) wrote:
hi,
i implemented some statistics for your memory controller.
here's a new version.
changes from the previous:
- make counters per-cpu.
- value *= PAGE_SIZE
YAMAMOTO-san, I
;
+ if (unlikely(!atomic_inc_not_zero(&pc->ref_cnt))) {
+ /* this page is under being uncharge ? */
+ unlock_page_cgroup(page);
cpu_relax() here?
YAMAMOTO Takashi
+ goto retry;
+ } else
+ goto done
hi,
i implemented some statistics for your memory controller.
here's a new version.
changes from the previous:
- make counters per-cpu.
- value *= PAGE_SIZE
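the per-cpu counter scheme can be sketched like this (the fixed TOY_NCPU and all names are illustrative; the kernel uses per-cpu areas rather than a plain array):

```c
/* sketch of per-cpu statistics: each cpu bumps its own slot with no
 * shared lock; a reader sums the slots.  the reported value is then
 * scaled by PAGE_SIZE to get bytes, as the changelog describes. */
#define TOY_NCPU 4

struct toy_percpu_stat {
	long cnt[TOY_NCPU];
};

static void toy_stat_add(struct toy_percpu_stat *s, int cpu, long val)
{
	s->cnt[cpu] += val;	/* cpu-private slot, no locking */
}

static long toy_stat_read(const struct toy_percpu_stat *s)
{
	long sum = 0;
	int i;

	for (i = 0; i < TOY_NCPU; i++)
		sum += s->cnt[i];
	return sum;
}
```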
YAMAMOTO Takashi
--- linux-2.6.23-rc8-mm2-stat/mm/memcontrol.c.BACKUP	2007-10-01 17:19:57.000000000 +0900
to make counters per-cpu.
- more statistics.
YAMAMOTO Takashi
--- linux-2.6.23-rc8-mm2-stat/mm/memcontrol.c.BACKUP	2007-10-01 17:19:57.000000000 +0900
+++ linux-2.6.23-rc8-mm2-stat/mm/memcontrol.c	2007-10-04 12:42:05.000000000 +0900
@@ -25,6 +25,7 @@
#include <linux/backing-dev.h>
#include
hi,
i implemented some statistics for your memory controller.
it's tested with 2.6.23-rc2-mm2 + memory controller v7.
i think it can be applied to 2.6.23-rc4-mm1 as well.
YAMAMOTO Takashi
todo: something like nr_active/inactive in /proc/vmstat.
--- ./mm/memcontrol.c.BACKUP	2007-08-29
hi,
thanks for comments.
Hi,
On Fri, 7 Sep 2007 12:39:42 +0900 (JST)
[EMAIL PROTECTED] (YAMAMOTO Takashi) wrote:
+enum mem_container_stat_index {
+ /*
+* for MEM_CONTAINER_TYPE_ALL, usage == pagecache + rss
+*/
+ MEMCONT_STAT_PAGECACHE,
+ MEMCONT_STAT_RSS
YAMAMOTO Takashi wrote:
Allow tasks to migrate from one container to the other. We migrate
mm_struct's mem_container only when the thread group id migrates.
+ /*
+ * Only thread group leaders are allowed to migrate, the mm_struct is
+ * in effect owned by the leader
-pid)
+ goto out;
does it mean that you can't move a process between containers
once its thread group leader exited?
YAMAMOTO Takashi
+echo 1 > /proc/sys/vm/drop_pages will help get rid of some of the pages
+cached in the container (page cache pages).
drop_caches
YAMAMOTO Takashi
YAMAMOTO Takashi wrote:
+ lock_meta_page(page);
+ /*
+ * Check if somebody else beat us to allocating the meta_page
+ */
+ race_mp = page_get_meta_page(page);
+ if (race_mp) {
+ kfree(mp);
+ mp = race_mp;
+ atomic_inc(&mp->ref_cnt
YAMAMOTO Takashi wrote:
Choose if we want cached pages to be accounted or not. By default both
are accounted for. A new set of tunables are added.
echo -n 1 > mem_control_type
switches the accounting to account for only mapped pages
echo -n 2 > mem_control_type
switches
MEM_CONTAINER_TYPE_ALL is 3, not 2.
YAMAMOTO Takashi
+enum {
+ MEM_CONTAINER_TYPE_UNSPEC = 0,
+ MEM_CONTAINER_TYPE_MAPPED,
+ MEM_CONTAINER_TYPE_CACHED,
+ MEM_CONTAINER_TYPE_ALL,
+ MEM_CONTAINER_TYPE_MAX,
+} mem_control_type;
+
+static struct mem_container init_mem_container;
+ mem
);
+ res_counter_uncharge(&mem->res, 1);
+ goto done;
+ }
i think you need css_put here.
YAMAMOTO Takashi
the lists without mem_cont->lru_lock held?
- what prevents mem_container_uncharge from freeing this meta_page
behind us?
YAMAMOTO Takashi
On 7/10/07, YAMAMOTO Takashi [EMAIL PROTECTED] wrote:
hi,
diff -puN mm/memory.c~mem-control-accounting mm/memory.c
--- linux-2.6.22-rc6/mm/memory.c~mem-control-accounting	2007-07-05 13:45:18.000000000 -0700
+++ linux-2.6.22-rc6-balbir/mm/memory.c	2007-07-05 13:45
container aware.
Signed-off-by: Balbir Singh [EMAIL PROTECTED]
it seems that the number of pages to scan (nr_active/nr_inactive
in shrink_zone) is calculated from NR_ACTIVE and NR_INACTIVE of the zone,
even in the case of per-container reclaim. is it intended?
YAMAMOTO Takashi
);
ditto.
can you check the rest of the patch by yourself? thanks.
YAMAMOTO Takashi