On Tue 17-12-13 11:48:20, Glauber Costa wrote:
On Mon, Dec 16, 2013 at 8:47 PM, Michal Hocko mho...@suse.cz wrote:
On Sat 14-12-13 12:15:33, Vladimir Davydov wrote:
The mem_cgroup structure contains nr_node_ids pointers to
mem_cgroup_per_node objects, not the objects themselves.
Ouch! This is 2k per node which is wasted.
On Fri, 2013-11-22 at 12:20 +0400, Vladimir Davydov wrote:
The patch fixes the following lockdep warning, which is 100%
reproducible on network restart:
======================================================
[ INFO: possible circular locking dependency detected ]
3.12.0+ #47 Tainted: GF
On Fri, 2013-11-22 at 12:20 +0400, Vladimir Davydov wrote:
On e1000_down(), we should ensure all asynchronous work items are canceled
before proceeding. Since the watchdog_task can schedule other work items
besides itself, it should be stopped first, but currently it is
stopped after the reset_task.
On Mon, Dec 2, 2013 at 10:15 PM, Michal Hocko mho...@suse.cz wrote:
[CCing Glauber - please do so in other posts for kmem related changes]
On Mon 02-12-13 17:08:13, Vladimir Davydov wrote:
The KMEM_ACCOUNTED_ACTIVATED flag was introduced by commit a8964b9b (memcg:
use static branches when code not
On Mon, Dec 2, 2013 at 10:51 PM, Michal Hocko mho...@suse.cz wrote:
On Mon 02-12-13 22:26:48, Glauber Costa wrote:
On Mon, Dec 2, 2013 at 10:15 PM, Michal Hocko mho...@suse.cz wrote:
[CCing Glauber - please do so in other posts for kmem related changes]
On Mon 02-12-13 17:08:13, Vladimir
On Mon, Dec 2, 2013 at 11:21 PM, Vladimir Davydov
vdavy...@parallels.com wrote:
On 12/02/2013 10:26 PM, Glauber Costa wrote:
On Mon, Dec 2, 2013 at 10:15 PM, Michal Hocko mho...@suse.cz wrote:
[CCing Glauber - please do so in other posts for kmem related changes]
On Mon 02-12-13 17:08:13,
Hi, Glauber
Hi.
In memcg_update_kmem_limit() we do the whole process of limit
initialization under a mutex, so the situation we need protection from in
tcp_update_limit() is impossible. BTW, once set, the 'activated' flag is
never cleared and never checked alone, only along with the 'active' flag.
Could you do something clever with just one flag? Probably yes. But I
doubt it would
be that much cleaner, this is just the way that patching sites work.
Thank you for spending your time to listen to me.
Don't worry! I thank you for carrying this forward.
Let me try to explain what is
Please note that, in contrast to previous versions, this patch set implements
slab shrinking only when we hit the user memory limit, so that kmem allocations
will still fail if we are below the user memory limit but close to the kmem
limit. This is because the implementation of kmem-only
On Mon, Dec 9, 2013 at 12:05 PM, Vladimir Davydov
vdavy...@parallels.com wrote:
I need to move this up a bit, and I am doing it in a separate patch just to
reduce churn in the patch that needs it.
Signed-off-by: Vladimir Davydov vdavy...@parallels.com
Reviewed-by: Glauber Costa glom...@openvz.org
On Mon, Dec 9, 2013 at 12:05 PM, Vladimir Davydov
vdavy...@parallels.com wrote:
This reduces the indentation level of do_try_to_free_pages() and removes an
extra loop over all eligible zones that counted the number of on-LRU pages.
Looks correct to me.
On Mon, Dec 9, 2013 at 12:05 PM, Vladimir Davydov
vdavy...@parallels.com wrote:
From: Glauber Costa glom...@openvz.org
During the past weeks, it became clear to us that the shrinker interface
It has been more than a few weeks by now =)
On Tue, Dec 10, 2013 at 3:50 PM, Vladimir Davydov
vdavy...@parallels.com wrote:
On 12/10/2013 08:18 AM, Dave Chinner wrote:
On Mon, Dec 09, 2013 at 12:05:54PM +0400, Vladimir Davydov wrote:
From: Glauber Costa glom...@openvz.org
In very low free kernel memory situations, it may be the case
On Tue, Dec 10, 2013 at 5:59 PM, Vladimir Davydov
vdavy...@parallels.com wrote:
Hi,
Looking through the per-memcg kmem_cache initialization code, I have a
bad feeling that it is prone to a race. Before setting out to fix it, I'd
like to make sure this race is not just in my imagination. Here it
OK, as far as I can tell, this is introducing per-node, per-memcg
LRU lists. Is that correct?
If so, then that is not what Glauber and I originally intended for
memcg LRUs. Per-node LRUs are expensive in terms of memory, and
cross-multiplying them by the number of memcgs in a system was not