Signed-off-by: Glauber Costa [EMAIL PROTECTED]
Signed-off-by: Ravikiran Thirumalai [EMAIL PROTECTED]
Acked-by: Shai Fultheim [EMAIL PROTECTED]
---
arch/x86/Kconfig |3 ++
arch/x86/kernel/vsmp_64.c | 56 +
2 files changed, 58 insertions
Signed-off-by: Glauber Costa [EMAIL PROTECTED]
Signed-off-by: Ravikiran Thirumalai [EMAIL PROTECTED]
Acked-by: Shai Fultheim [EMAIL PROTECTED]
---
arch/x86/kernel/vsmp_64.c |7 +++
1 files changed, 7 insertions(+), 0 deletions(-)
diff --git a/arch/x86/kernel/vsmp_64.c b/arch/x86/kernel
Hi,
This series of five patches turns the vsmp architecture support in
x86_64 into a paravirt client. If PARAVIRT is on, the probe
function vsmp_init() is run unconditionally, patching the necessary
irq functions accordingly if running on top of such a box.
Change Makefile so vsmp_64.o object is dependent
on PARAVIRT, rather than X86_VSMP
Signed-off-by: Glauber Costa [EMAIL PROTECTED]
Signed-off-by: Ravikiran Thirumalai [EMAIL PROTECTED]
Acked-by: Shai Fultheim [EMAIL PROTECTED]
---
arch/x86/kernel/Makefile |2 +-
1 files changed, 1 insertions
Signed-off-by: Glauber Costa [EMAIL PROTECTED]
Signed-off-by: Ravikiran Thirumalai [EMAIL PROTECTED]
Acked-by: Shai Fultheim [EMAIL PROTECTED]
---
arch/x86/kernel/vsmp_64.c |8
1 files changed, 4 insertions(+), 4 deletions(-)
diff --git a/arch/x86/kernel/vsmp_64.c b/arch/x86/kernel
On 10/29/2012 07:26 PM, JoonSoo Kim wrote:
2012/10/19 Glauber Costa glom...@parallels.com:
+void kmem_cache_destroy_memcg_children(struct kmem_cache *s)
+{
+ struct kmem_cache *c;
+ int i;
+
+ if (!s->memcg_params)
+ return;
+ if (!s->memcg_params
On 10/30/2012 07:31 PM, Christoph Lameter wrote:
On Fri, 26 Oct 2012, JoonSoo Kim wrote:
2012/10/25 Christoph Lameter c...@linux.com:
On Wed, 24 Oct 2012, Pekka Enberg wrote:
So I hate this patch with a passion. We don't have any fastpaths in
mm/slab_common.c nor should we. Those should be
so it can fail
(in patch 2, for simplicity).
I consider a general hook acceptable and useful, and it is the simplest solution to
the problem I face. Let me know what you guys think of it.
Glauber Costa (2):
generalize post_clone into post_create
allow post_create to fail
Documentation/cgroups
Initialization in post_create can theoretically fail (although it won't
in cpuset). The comment in cgroup.c even seems to indicate that the
possibility of failure was the intention.
It is not terribly complicated, so let us just allow it to fail.
Signed-off-by: Glauber Costa glom...@parallels.com
controller is currently the only in-tree user, and is converted.
Signed-off-by: Glauber Costa glom...@parallels.com
CC: Tejun Heo t...@kernel.org
CC: Michal Hocko mho...@suse.cz
CC: Li Zefan lize...@huawei.com
---
Documentation/cgroups/cgroups.txt | 13 +++--
include/linux/cgroup.h
}
prepare_to_wait(&cgroup_rmdir_waitq, &wait, TASK_INTERRUPTIBLE);
- if (!cgroup_clear_css_refs(cgrp)) {
- mutex_unlock(cgroup_mutex);
- /*
- * Because someone may call cgroup_wakeup_rmdir_waiter() before
- *
On 10/31/2012 08:22 AM, Tejun Heo wrote:
Because ->pre_destroy() could fail and can't be called under
cgroup_mutex, cgroup destruction did something very ugly.
1. Grab cgroup_mutex and verify it can be destroyed; fail otherwise.
2. Release cgroup_mutex and call ->pre_destroy().
3.
On 10/31/2012 08:22 AM, Tejun Heo wrote:
Hello, guys.
cgroup removal path is quite ugly. A lot of the ugliness comes from
the weird design which allows ->pre_destroy() to fail and the feature
to drain existing CSS reference counts before committing to removal.
Both mean that it should be
On 10/31/2012 08:57 PM, Tejun Heo wrote:
I have a patch queued to add ->pre_destroy() - different from
Glauber's in that it can't fail, so we'll have
->create()
->post_create()
->pre_destroy()
->destroy()
Where ->create() may fail but none other can.
On 10/31/2012 09:10 PM, Tejun Heo wrote:
Hello, Glauber.
On Wed, Oct 31, 2012 at 10:06 AM, Glauber Costa glom...@parallels.com wrote:
This is not the topic of this thread, but since you brought it:
If you take a look at the description patch in the patch I sent, the
problem I outlined
On 10/31/2012 09:18 PM, Tejun Heo wrote:
Hello,
On Wed, Oct 31, 2012 at 05:49:33PM +0400, Glauber Costa wrote:
The only thing that drew my attention is that you are changing the
local_irq_save callsite to local_irq_disable. It shouldn't be a problem,
since this is never expected
On 10/31/2012 09:26 PM, Tejun Heo wrote:
On Wed, Oct 31, 2012 at 09:24:06PM +0400, Glauber Costa wrote:
Note both in the commit messages.
I am sorry, but I can't find anything that may be related to this in the
commit messages. Can you be more specific ?
Eh.. 'd', missing there. I meant
On 10/31/2012 09:25 PM, Tejun Heo wrote:
Hello,
On Wed, Oct 31, 2012 at 09:19:51PM +0400, Glauber Costa wrote:
I don't see post_create failing as a huge problem. The natural
synchronization point would be right after post_create - then you can
definitely tell that it is online. Although
On 10/31/2012 09:25 PM, Tejun Heo wrote:
More proper names for these callbacks would be,
->allocate()
->online()
->offline()
->free()
I support the name change, btw.
On 11/01/2012 11:11 AM, Michael Wang wrote:
On 10/29/2012 06:49 PM, Glauber Costa wrote:
We currently provide lockdep annotation for kmalloc caches, and also
caches that have SLAB_DEBUG_OBJECTS enabled. The reason for this is that
we can quite frequently nest in the l3->list_lock lock, which
On 11/01/2012 01:10 PM, Michael Wang wrote:
On 11/02/2012 12:48 AM, Glauber Costa wrote:
On 11/01/2012 11:11 AM, Michael Wang wrote:
On 10/29/2012 06:49 PM, Glauber Costa wrote:
We currently provide lockdep annotation for kmalloc caches, and also
caches that have SLAB_DEBUG_OBJECTS enabled
.
For the time being, I am defining a new variant of THREADINFO_GFP, not
to mess with the other path. Once the slab is also tracked by memcg, we
can get rid of that flag.
Tested to successfully protect against :(){ :|: };:
Signed-off-by: Glauber Costa glom...@parallels.com
Acked-by: Frederic Weisbecker
Signed-off-by: Glauber Costa glom...@parallels.com
Acked-by: Johannes Weiner han...@cmpxchg.org
Acked-by: Rik van Riel r...@redhat.com
Acked-by: Mel Gorman m...@csn.ul.ie
Acked-by: Kamezawa Hiroyuki kamezawa.hir...@jp.fujitsu.com
Acked-by: Michal Hocko mho...@suse.cz
CC: Christoph Lameter c...@linux.com
CC
calculation pointed out by Christoph Lameter ]
Signed-off-by: Suleiman Souhlal sulei...@google.com
Signed-off-by: Glauber Costa glom...@parallels.com
Acked-by: Kamezawa Hiroyuki kamezawa.hir...@jp.fujitsu.com
Acked-by: Michal Hocko mho...@suse.cz
Acked-by: Johannes Weiner han...@cmpxchg.org
Acked
cache creation, when we
allocate data using caches that are not necessarily created already.
[ v2: wrap the whole enqueue process, INIT_WORK can alloc memory ]
Signed-off-by: Glauber Costa glom...@parallels.com
CC: Christoph Lameter c...@linux.com
CC: Pekka Enberg penb...@cs.helsinki.fi
CC: Michal
Souhlal sulei...@google.com
Signed-off-by: Glauber Costa glom...@parallels.com
CC: Christoph Lameter c...@linux.com
CC: Pekka Enberg penb...@cs.helsinki.fi
CC: Michal Hocko mho...@suse.cz
CC: Kamezawa Hiroyuki kamezawa.hir...@jp.fujitsu.com
CC: Johannes Weiner han...@cmpxchg.org
CC: Tejun Heo t
this to be already set - which
memcg_kmem_register_cache will do - when we reach __kmem_cache_create()
Signed-off-by: Glauber Costa glom...@parallels.com
CC: Christoph Lameter c...@linux.com
CC: Pekka Enberg penb...@cs.helsinki.fi
CC: Michal Hocko mho...@suse.cz
CC: Kamezawa Hiroyuki kamezawa.hir
cache.
Signed-off-by: Glauber Costa glom...@parallels.com
CC: Christoph Lameter c...@linux.com
CC: Pekka Enberg penb...@cs.helsinki.fi
CC: Michal Hocko mho...@suse.cz
CC: Kamezawa Hiroyuki kamezawa.hir...@jp.fujitsu.com
CC: Johannes Weiner han...@cmpxchg.org
CC: Suleiman Souhlal sulei...@google.com
Signed-off-by: Glauber Costa glom...@parallels.com
CC: Christoph Lameter c...@linux.com
CC: Pekka Enberg penb...@cs.helsinki.fi
CC: Michal Hocko mho...@suse.cz
CC: Kamezawa Hiroyuki kamezawa.hir...@jp.fujitsu.com
CC: Johannes Weiner han...@cmpxchg.org
CC: Suleiman Souhlal sulei...@google.com
CC
from the cache
code. Caches are only destroyed in process context, so we queue them
up for later processing in the general case.
[ v5: removed cachep backpointer ]
Signed-off-by: Glauber Costa glom...@parallels.com
CC: Christoph Lameter c...@linux.com
CC: Pekka Enberg penb...@cs.helsinki.fi
CC
draw from the user
counter, and can be bigger than a single page, as it is the case with
the stack (usually 2 pages) or some higher order slabs.
[ glom...@parallels.com: added a changelog ]
Signed-off-by: Suleiman Souhlal sulei...@google.com
Signed-off-by: Glauber Costa glom...@parallels.com
Acked
, nothing will be
propagated.
It can also happen that a root cache has its tunables updated during
normal system operation. In this case, we will propagate the change to
all caches that are already active.
Signed-off-by: Glauber Costa glom...@parallels.com
CC: Christoph Lameter c...@linux.com
CC
delayed_work to avoid calling verify_dead at every free]
[ v6: do not spawn worker if work is already pending ]
Signed-off-by: Glauber Costa glom...@parallels.com
CC: Christoph Lameter c...@linux.com
CC: Pekka Enberg penb...@cs.helsinki.fi
CC: Michal Hocko mho...@suse.cz
CC: Kamezawa Hiroyuki
issues pointed out by JoonSoo Kim, revert the
cache synchronous allocation ]
Signed-off-by: Glauber Costa glom...@parallels.com
CC: Christoph Lameter c...@linux.com
CC: Pekka Enberg penb...@cs.helsinki.fi
CC: Michal Hocko mho...@suse.cz
CC: Kamezawa Hiroyuki kamezawa.hir...@jp.fujitsu.com
CC
memcg_kmem_get_cache() before all the cache allocations.
[ v6: simplified kmalloc relay code ]
Signed-off-by: Glauber Costa glom...@parallels.com
CC: Christoph Lameter c...@linux.com
CC: Pekka Enberg penb...@cs.helsinki.fi
CC: Michal Hocko mho...@suse.cz
CC: Kamezawa Hiroyuki kamezawa.hir...@jp.fujitsu.com
-by: Glauber Costa glom...@parallels.com
CC: Christoph Lameter c...@linux.com
CC: Pekka Enberg penb...@cs.helsinki.fi
CC: Christoph Lameter c...@linux.com
CC: Pekka Enberg penb...@cs.helsinki.fi
CC: Michal Hocko mho...@suse.cz
CC: Kamezawa Hiroyuki kamezawa.hir...@jp.fujitsu.com
CC: Johannes Weiner han
kmem limited memcgs, a natural point for this
to happen is when we write to the limit. At that point, we already have
set_limit_mutex held, so that will become our natural synchronization
mechanism.
Signed-off-by: Glauber Costa glom...@parallels.com
CC: Christoph Lameter c...@linux.com
CC: Pekka
().
But because the latter accesses per-zone info,
free_mem_cgroup_per_zone_info() needs to be moved as well. With that, we
are left with the per_cpu stats only. Better move it all.
Signed-off-by: Glauber Costa glom...@parallels.com
Tested-by: Greg Thelen gthe...@google.com
Acked-by: Michal Hocko mho...@suse.cz
.
[ v2: moved to idr/ida instead of redoing the indexes ]
[ v3: moved call to ida_init away from cgroup creation to fix a bug ]
[ v4: no longer using the index mechanism ]
[ v6: renamed memcg_css_id to memcg_cache_id, and return a proper id ]
Signed-off-by: Glauber Costa glom...@parallels.com
CC
. Because they are not annotated, lockdep will trigger.
Signed-off-by: Glauber Costa glom...@parallels.com
CC: Christoph Lameter c...@linux.com
CC: Pekka Enberg penb...@cs.helsinki.fi
CC: David Rientjes rient...@google.com
CC: JoonSoo Kim js1...@gmail.com
---
mm/slab.c | 34
useful for people who just
want to track kernel memory usage.
Glauber Costa (27):
memcg: change defines to an enum
kmem accounting basic infrastructure
Add a __GFP_KMEMCG flag
memcg: kmem controller infrastructure
mm: Allocate kernel pages to the right memcg
res_counter: return
in kmem_accounted ]
Signed-off-by: Glauber Costa glom...@parallels.com
Acked-by: Michal Hocko mho...@suse.cz
Acked-by: Kamezawa Hiroyuki kamezawa.hir...@jp.fujitsu.com
CC: Christoph Lameter c...@linux.com
CC: Pekka Enberg penb...@cs.helsinki.fi
CC: Johannes Weiner han...@cmpxchg.org
CC: Suleiman Souhlal
For the kmem slab controller, we need to record some extra
information in the kmem_cache structure.
Signed-off-by: Glauber Costa glom...@parallels.com
Signed-off-by: Suleiman Souhlal sulei...@google.com
CC: Christoph Lameter c...@linux.com
CC: Pekka Enberg penb...@cs.helsinki.fi
CC: Michal Hocko
Signed-off-by: Glauber Costa glom...@parallels.com
Acked-by: Kamezawa Hiroyuki kamezawa.hir...@jp.fujitsu.com
Acked-by: Michal Hocko mho...@suse.cz
CC: Frederic Weisbecker fweis...@redhat.com
CC: Christoph Lameter c...@linux.com
CC: Pekka Enberg penb...@cs.helsinki.fi
CC: Johannes Weiner han
much better than a reference count decrease at every
operation.
[ v3: merged all lifecycle related patches in one ]
[ v5: changed memcg_kmem_dead's name ]
Signed-off-by: Glauber Costa glom...@parallels.com
Acked-by: Michal Hocko mho...@suse.cz
Acked-by: Kamezawa Hiroyuki kamezawa.hir
.
Because of that, when we destroy a memcg, we only make sure the
destruction will succeed by discounting the kmem charges from the user
charges when we try to empty the cgroup.
Signed-off-by: Glauber Costa glom...@parallels.com
Acked-by: Kamezawa Hiroyuki kamezawa.hir...@jp.fujitsu.com
Reviewed
: inverted test order to avoid a memcg_get leak,
free_accounted_pages simplification ]
[ v4: test for TIF_MEMDIE at newpage_charge ]
Signed-off-by: Glauber Costa glom...@parallels.com
Acked-by: Michal Hocko mho...@suse.cz
Acked-by: Mel Gorman mgor...@suse.de
Acked-by: Kamezawa Hiroyuki kamezawa.hir
it. This is the same
semantics as the atomic variables in the kernel.
Since the current return value is void, we don't need to worry about
anything breaking due to this change: nobody relied on that, and only
users appearing from now on will be checking this value.
Signed-off-by: Glauber Costa glom
code for kmemcg compiled out and core functions in
memcontrol.c, moved kmem code to the middle to avoid forward decls ]
Signed-off-by: Glauber Costa glom...@parallels.com
Acked-by: Michal Hocko mho...@suse.cz
Acked-by: Kamezawa Hiroyuki kamezawa.hir...@jp.fujitsu.com
CC: Christoph Lameter c
memory)
[ v4: make kmem files part of the main array;
do not allow limit to be set for non-empty cgroups ]
[ v5: cosmetic changes ]
[ v6: name changes and reorganizations, moved memcg_propagate_kmem ]
Signed-off-by: Glauber Costa glom...@parallels.com
Acked-by: Kamezawa Hiroyuki kamezawa.hir
This is just a cleanup patch for clarity of expression. In earlier
submissions, people asked it to be in a separate patch, so here it is.
[ v2: use named enum as type throughout the file as well ]
Signed-off-by: Glauber Costa glom...@parallels.com
Acked-by: Kamezawa Hiroyuki kamezawa.hir
On 11/02/2012 04:04 AM, Andrew Morton wrote:
On Thu, 1 Nov 2012 16:07:16 +0400
Glauber Costa glom...@parallels.com wrote:
Hi,
This work introduces the kernel memory controller for memcg. Unlike previous
submissions, this includes the whole controller, comprised of slab and stack
memory
On 11/02/2012 04:05 AM, Andrew Morton wrote:
On Thu, 1 Nov 2012 16:07:39 +0400
Glauber Costa glom...@parallels.com wrote:
This patch implements destruction of memcg caches. Right now,
only caches where our reference counter is the last remaining are
deleted. If there are any other
On 11/02/2012 04:05 AM, Andrew Morton wrote:
On Thu, 1 Nov 2012 16:07:27 +0400
Glauber Costa glom...@parallels.com wrote:
Because the ultimate goal of the kmem tracking in memcg is to track slab
pages as well, we can't guarantee that we'll always be able to point a
page to a particular
+
+#ifdef CONFIG_MEMCG_KMEM
+WARN_ON(cgroup_add_cftypes(&mem_cgroup_subsys,
+ kmem_cgroup_files));
+#endif
+
Why not just make it part of mem_cgroup_files[]?
Thanks.
Done.
On 09/21/2012 10:14 PM, Tejun Heo wrote:
Hello, Glauber.
On Tue, Sep 18, 2012 at 06:11:59PM +0400, Glauber Costa wrote:
+void memcg_register_cache(struct mem_cgroup *memcg, struct kmem_cache
*cachep)
+{
+int id = -1;
+
+if (!memcg)
+id = ida_simple_get(&cache_types
On 09/22/2012 12:46 AM, Tejun Heo wrote:
Hello,
On Tue, Sep 18, 2012 at 06:11:54PM +0400, Glauber Costa wrote:
This is a followup to the previous kmem series. I divided them logically
so it gets easier for reviewers. But I believe they are ready to be merged
together (although we can do
On 09/22/2012 12:52 AM, Tejun Heo wrote:
Missed some stuff.
On Tue, Sep 18, 2012 at 06:12:00PM +0400, Glauber Costa wrote:
+static struct kmem_cache *memcg_create_kmem_cache(struct mem_cgroup *memcg,
+ struct kmem_cache *cachep
On 09/22/2012 12:40 AM, Tejun Heo wrote:
Hello, Glauber.
On Tue, Sep 18, 2012 at 06:12:09PM +0400, Glauber Costa wrote:
@@ -764,10 +777,21 @@ static struct kmem_cache
*memcg_create_kmem_cache(struct mem_cgroup *memcg,
goto out;
}
+/*
+ * Because the cache
On 09/21/2012 10:32 PM, Tejun Heo wrote:
On Tue, Sep 18, 2012 at 06:12:00PM +0400, Glauber Costa wrote:
diff --git a/mm/memcontrol.c b/mm/memcontrol.c
index 04851bb..1cce5c3 100644
--- a/mm/memcontrol.c
+++ b/mm/memcontrol.c
@@ -339,6 +339,11 @@ struct mem_cgroup {
#ifdef CONFIG_INET
And the above description too makes me scratch my head quite a bit. I
can see what the patch is doing but can't understand the why.
* Why was it punting the freeing to workqueue anyway? ISTR something
about static_keys but my memory fails. What changed? Why don't we
need it anymore?
On 09/21/2012 11:59 PM, Tejun Heo wrote:
Hello,
On Tue, Sep 18, 2012 at 06:12:01PM +0400, Glauber Costa wrote:
+static void memcg_stop_kmem_account(void)
+{
+if (!current->mm)
+return;
+
+current->memcg_kmem_skip_account++;
+}
+
+static void memcg_resume_kmem_account
On 09/24/2012 04:41 PM, Christoph wrote:
On Sep 24, 2012, at 3:12, Glauber Costa glom...@parallels.com wrote:
On 09/21/2012 10:14 PM, Tejun Heo wrote:
The new caches will appear under /proc/slabinfo with the rest, with a
string appended that identifies the group.
There are f.e. meminfo
On 09/24/2012 05:42 PM, Christoph Lameter wrote:
On Mon, 24 Sep 2012, Glauber Costa wrote:
But that is orthogonal, isn't it? People will still expect to see it in
the old slabinfo file.
The current scheme for memory statistics is
/proc/meminfo contains global counters
/sys/devices
On 09/24/2012 05:56 PM, Christoph Lameter wrote:
On Mon, 24 Sep 2012, Glauber Costa wrote:
The reason I say it is orthogonal, is that people will still want to see
their caches in /proc/slabinfo, regardless of wherever else they'll be.
It was a requirement from Pekka in one of the first
On 09/24/2012 07:38 PM, Pekka Enberg wrote:
On 09/24/2012 05:56 PM, Christoph Lameter wrote:
On Mon, 24 Sep 2012, Glauber Costa wrote:
The reason I say it is orthogonal, is that people will still want to see
their caches in /proc/slabinfo, regardless of wherever else they'll
.
With bypassed kernel, we drop this down to 1.5 %, which starts to fall
in the acceptable range. More investigation is needed to see if we can
claim that last percent back, but I believe at last part of it should
be.
Glauber Costa (4):
memcg: provide root figures from system totals
memcg: make
part of it should
be.
Signed-off-by: Glauber Costa glom...@parallels.com
CC: Michal Hocko mho...@suse.cz
CC: Kamezawa Hiroyuki kamezawa.hir...@jp.fujitsu.com
CC: Johannes Weiner han...@cmpxchg.org
CC: Mel Gorman mgor...@suse.de
CC: Andrew Morton a...@linux-foundation.org
---
include/linux
to do with the soft limit and max_usage.
Comments and suggestions appreciated.
Signed-off-by: Glauber Costa glom...@parallels.com
CC: Michal Hocko mho...@suse.cz
CC: Kamezawa Hiroyuki kamezawa.hir...@jp.fujitsu.com
CC: Johannes Weiner han...@cmpxchg.org
CC: Mel Gorman mgor...@suse.de
CC: Andrew
the root as a common ancestor should lead to better
scalability for not-uncommon case of tasks in the cgroup being
node-bound to different nodes in NUMA systems.
Signed-off-by: Glauber Costa glom...@parallels.com
CC: Michal Hocko mho...@suse.cz
CC: Kamezawa Hiroyuki kamezawa.hir...@jp.fujitsu.com
CC
.
flatmem case is a bit more complicated, so that one is left out for
the moment.
Signed-off-by: Glauber Costa glom...@parallels.com
CC: Michal Hocko mho...@suse.cz
CC: Kamezawa Hiroyuki kamezawa.hir...@jp.fujitsu.com
CC: Johannes Weiner han...@cmpxchg.org
CC: Mel Gorman mgor...@suse.de
CC: Andrew Morton
if it happens to be passed (such as when duplicating a cache in
the kmem memcg patches)
Signed-off-by: Glauber Costa glom...@parallels.com
CC: Christoph Lameter c...@linux.com
CC: Pekka Enberg penb...@cs.helsinki.fi
CC: David Rientjes rient...@google.com
---
include/linux/slab.h | 4
mm/slab_common.c
On 09/24/2012 09:56 PM, Tejun Heo wrote:
Hello, Glauber.
On Mon, Sep 24, 2012 at 12:46:35PM +0400, Glauber Costa wrote:
+#ifdef CONFIG_MEMCG_KMEM
+ /* Slab accounting */
+ struct kmem_cache *slabs[MAX_KMEM_CACHE_TYPES];
+#endif
Bah, 400 entry array in struct mem_cgroup. Can't we do
On 09/26/2012 04:46 AM, David Rientjes wrote:
On Tue, 25 Sep 2012, Christoph Lameter wrote:
No cache should ever pass those as a creation flags. We can just ignore
this bit if it happens to be passed (such as when duplicating a cache in
the kmem memcg patches)
Acked-by: Christoph Lameter
On 09/26/2012 01:02 AM, Andrew Morton wrote:
nomemcg : memcg compile disabled.
base : memcg enabled, patch not applied.
bypassed : memcg enabled, with patch applied.
base      bypassed
User 109.12 105.64
System 1646.84 1597.98
Elapsed
On 09/26/2012 06:03 PM, Michal Hocko wrote:
On Tue 18-09-12 18:04:01, Glauber Costa wrote:
This patch adds the basic infrastructure for the accounting of the slab
caches. To control that, the following files are created:
* memory.kmem.usage_in_bytes
* memory.kmem.limit_in_bytes
On 09/26/2012 08:01 PM, Michal Hocko wrote:
On Wed 26-09-12 18:33:10, Glauber Costa wrote:
On 09/26/2012 06:03 PM, Michal Hocko wrote:
On Tue 18-09-12 18:04:01, Glauber Costa wrote:
[...]
@@ -4961,6 +5015,12 @@ mem_cgroup_create(struct cgroup *cont)
int cpu
On 09/26/2012 08:36 PM, Tejun Heo wrote:
Hello, Michal, Glauber.
On Wed, Sep 26, 2012 at 04:03:47PM +0200, Michal Hocko wrote:
Haven't we already discussed that a new memcg should inherit kmem_accounted
from its parent for use_hierarchy?
Say we have
root
|
A (kmem_accounted = 1,
On 09/26/2012 09:44 PM, Tejun Heo wrote:
Hello, Glauber.
On Wed, Sep 26, 2012 at 10:36 AM, Glauber Costa glom...@parallels.com wrote:
This was discussed multiple times. Our interest is to preserve existing
deployed setups that were tuned in a world where kmem didn't exist.
Because we also
On 09/26/2012 10:01 PM, Tejun Heo wrote:
Hello,
On Wed, Sep 26, 2012 at 09:53:09PM +0400, Glauber Costa wrote:
I understand your trauma about over flexibility, and you know I share of
it. But I don't think there is any need to cap it here. Given kmem
accounted is perfectly hierarchical
On 09/26/2012 11:34 PM, Tejun Heo wrote:
Hello,
On Wed, Sep 26, 2012 at 10:56:09PM +0400, Glauber Costa wrote:
For me, it is the other way around: it makes perfect sense to have a
per-subtree selection of features where it doesn't hurt us, provided it
is hierarchical. For the mere fact
On 09/26/2012 11:56 PM, Tejun Heo wrote:
Hello,
On Wed, Sep 26, 2012 at 11:46:37PM +0400, Glauber Costa wrote:
Besides not being part of cgroup core, and respecting very much both
cgroups' and basic sanity properties, kmem is an actual feature that
some people want, and some people don't
On 09/27/2012 12:16 AM, Tejun Heo wrote:
On Thu, Sep 27, 2012 at 12:02:14AM +0400, Glauber Costa wrote:
But think in terms of functionality: This thing here is a lot more
similar to swap than use_hierarchy. Would you argue that memsw should be
per-root ?
I'm fairly sure you can make about
On 09/27/2012 02:10 AM, Tejun Heo wrote:
Hello, Glauber.
On Thu, Sep 27, 2012 at 01:24:40AM +0400, Glauber Costa wrote:
kmem_accounted is not a switch. It is an internal representation only.
The semantics, that we discussed exhaustively in San Diego, is that a
group that is not limited
On 09/27/2012 02:11 AM, Johannes Weiner wrote:
On Thu, Sep 27, 2012 at 12:02:14AM +0400, Glauber Costa wrote:
On 09/26/2012 11:56 PM, Tejun Heo wrote:
Hello,
On Wed, Sep 26, 2012 at 11:46:37PM +0400, Glauber Costa wrote:
Besides not being part of cgroup core, and respecting very much both
On 09/27/2012 02:42 AM, Tejun Heo wrote:
Hello, Glauber.
On Thu, Sep 27, 2012 at 02:29:06AM +0400, Glauber Costa wrote:
And then what? If you want a different behavior you need to go kill all
your services that are using memcg so you can get the behavior you want?
And if they happen
On 09/27/2012 03:08 AM, Tejun Heo wrote:
Hello, Glauber.
On Thu, Sep 27, 2012 at 02:54:11AM +0400, Glauber Costa wrote:
I don't. Much has been said in the past about the problem of sharing. A
lot of the kernel objects are shared by nature, this is pretty much
unavoidable. The answer we have
On 09/27/2012 05:16 AM, David Rientjes wrote:
On Wed, 26 Sep 2012, Glauber Costa wrote:
So the problem I am facing here is that when I am creating caches from
memcg, I would very much like to reuse their flags fields. They are
stored in the cache itself, so this is not a problem. But slab
On 09/26/2012 07:51 PM, Michal Hocko wrote:
On Tue 18-09-12 18:04:03, Glauber Costa wrote:
This patch introduces infrastructure for tracking kernel memory pages to
a given memcg. This will happen whenever the caller includes the flag
__GFP_KMEMCG flag, and the task belong to a memcg other than
Michal, Johannes, Kamezawa, what are your thoughts?
waiting! =)
Well, you guys generated a lot of discussion that one has to read
through, didn't you :P
We're quite good at that!
On 09/27/2012 04:15 PM, Michal Hocko wrote:
On Wed 26-09-12 16:33:34, Tejun Heo wrote:
[...]
So, this seems properly crazy to me at the similar level of
use_hierarchy fiasco. I'm gonna NACK on this.
As I said: all use cases I particularly care about are covered by a
global switch.
I am
On 09/27/2012 04:40 PM, Michal Hocko wrote:
On Thu 27-09-12 16:20:55, Glauber Costa wrote:
On 09/27/2012 04:15 PM, Michal Hocko wrote:
On Wed 26-09-12 16:33:34, Tejun Heo wrote:
[...]
So, this seems properly crazy to me at the similar level of
use_hierarchy fiasco. I'm gonna NACK
On 09/27/2012 05:34 PM, Mel Gorman wrote:
On Tue, Sep 18, 2012 at 06:04:02PM +0400, Glauber Costa wrote:
This flag is used to indicate to the callees that this allocation is a
kernel allocation in process context, and should be accounted to
current's memcg. It takes numerical place
and handling is done from common code.
Signed-off-by: Glauber Costa glom...@parallels.com
CC: Christoph Lameter c...@linux.com
CC: Pekka Enberg penb...@cs.helsinki.fi
CC: David Rientjes rient...@google.com
---
include/linux/slab_def.h | 10 ++
include/linux/slub_def.h | 11 +++
mm/slab.c
Hi,
This patch moves on with the slab caches commonization, by moving
the slabinfo processing to common code in slab_common.c. It only touches
slub and slab, since slob doesn't create that file, which is protected
by a Kconfig switch.
Enjoy,
Glauber Costa (4):
move slabinfo processing
time, but possibly a smaller
order in case of a retry. When we use it in slab_common.c we will be
talking about base values, but those functions would still have to
exist inside slub, so doing this we can just reuse them.
Signed-off-by: Glauber Costa glom...@parallels.com
CC: Christoph Lameter c
-visible CONFIG_DEBUG_SLAB switch, we can move the header
printing to a common location.
Signed-off-by: Glauber Costa glom...@parallels.com
CC: Christoph Lameter c...@linux.com
CC: Pekka Enberg penb...@cs.helsinki.fi
CC: David Rientjes rient...@google.com
---
mm/slab.c| 24
This patch moves all the common machinery to slabinfo processing
to slab_common.c. We can do better by noticing that the output is
heavily common, and having the allocators to just provide finished
information about this. But after this first step, this can be done
easier.
Signed-off-by: Glauber
On 09/27/2012 06:49 PM, Tejun Heo wrote:
Hello, Mel.
On Thu, Sep 27, 2012 at 03:28:22PM +0100, Mel Gorman wrote:
In addition, how is userland supposed to know which
workload is shared kmem heavy or not?
By using a bit of common sense.
An application may not be able to figure this out
On 09/27/2012 07:07 PM, Christoph Lameter wrote:
On Thu, 27 Sep 2012, Glauber Costa wrote:
--- a/mm/slab_common.c
+++ b/mm/slab_common.c
@@ -239,7 +239,23 @@ static void s_stop(struct seq_file *m, void *p)
static int s_show(struct seq_file *m, void *p)
{
-return slabinfo_show(m, p