Please determine what impact this has upon networking?
I expect Eric Dumazet, Dave Miller and Tom Herbert could suggest
testing approaches.
I can test it, but unfortunately I am unlikely to get to prepare a good
environment before Barcelona.
I know, however, that Greg Thelen was testing
On Wed, Nov 07 2012, Kirill A. Shutemov wrote:
On Wed, Nov 07, 2012 at 02:53:49AM -0800, Anton Vorontsov wrote:
Hi all,
This is the third RFC. As suggested by Minchan Kim, the API is much
simplified now (comparing to vmevent_fd):
- As well as Minchan, KOSAKI Motohiro didn't like the
On Mon, Mar 25 2013, Greg Thelen wrote:
On Mon, Mar 25 2013, Dave Chinner wrote:
On Mon, Mar 25, 2013 at 05:39:13PM -0700, Greg Thelen wrote:
On Mon, Mar 25 2013, Dave Chinner wrote:
On Mon, Mar 25, 2013 at 10:22:31AM -0700, Greg Thelen wrote:
Call cond_resched() from shrink_dentry_list
On Wed, Apr 10 2013, Andrew Morton wrote:
On Tue, 09 Apr 2013 17:37:20 -0700 Greg Thelen gthe...@google.com wrote:
Call cond_resched() in shrink_dcache_parent() to maintain
interactivity.
Before this patch:
void shrink_dcache_parent(struct dentry * parent)
{
while ((found
))
err(1, "gettimeofday");
diff = (((double)t2.tv_sec * 1000000 + t2.tv_usec) -
((double)t1.tv_sec * 1000000 + t1.tv_usec));
printf("done: %g elapsed\n", diff/1e6);
return 0;
}
Signed-off-by: Greg Thelen gthe
On Mon, Mar 25 2013, Dave Chinner wrote:
On Mon, Mar 25, 2013 at 10:22:31AM -0700, Greg Thelen wrote:
Call cond_resched() from shrink_dentry_list() to preserve
shrink_dcache_parent() interactivity.
void shrink_dcache_parent(struct dentry * parent)
{
while ((found = select_parent
On Mon, Mar 25 2013, Dave Chinner wrote:
On Mon, Mar 25, 2013 at 05:39:13PM -0700, Greg Thelen wrote:
On Mon, Mar 25 2013, Dave Chinner wrote:
On Mon, Mar 25, 2013 at 10:22:31AM -0700, Greg Thelen wrote:
Call cond_resched() from shrink_dentry_list() to preserve
shrink_dcache_parent
On Tue, Feb 05 2013, Michal Hocko wrote:
On Tue 05-02-13 10:09:57, Greg Thelen wrote:
On Tue, Feb 05 2013, Michal Hocko wrote:
On Tue 05-02-13 08:48:23, Greg Thelen wrote:
On Tue, Feb 05 2013, Michal Hocko wrote:
On Tue 05-02-13 15:49:47, azurIt wrote:
[...]
Just to be sure
On Sun, Feb 10 2013, Anton Vorontsov wrote:
With this patch userland applications that want to maintain the
interactivity/memory allocation cost can use the new pressure level
notifications. The levels are defined like this:
The low level means that the system is reclaiming memory for new
On Tue, Feb 12 2013, Anton Vorontsov wrote:
Hi Greg,
Thanks for taking a look!
On Tue, Feb 12, 2013 at 10:42:51PM -0800, Greg Thelen wrote:
[...]
+static bool vmpressure_event(struct vmpressure *vmpr,
+ unsigned long s, unsigned long r)
+{
+ struct
On Tue, Feb 05 2013, Michal Hocko wrote:
On Tue 05-02-13 15:49:47, azurIt wrote:
[...]
Just to be sure - am i supposed to apply this two patches?
http://watchdog.sk/lkml/patches/
5-memcg-fix-1.patch is not complete. It doesn't contain the followup I
mentioned in a follow-up email. Here is
On Tue, Feb 05 2013, Michal Hocko wrote:
On Tue 05-02-13 08:48:23, Greg Thelen wrote:
On Tue, Feb 05 2013, Michal Hocko wrote:
On Tue 05-02-13 15:49:47, azurIt wrote:
[...]
Just to be sure - am i supposed to apply this two patches?
http://watchdog.sk/lkml/patches/
5-memcg-fix-1
On Wed, Feb 27 2013, Roman Gushchin wrote:
Hi, all!
I've implemented low limits for memory cgroups. The primary goal was to add
an ability
to protect some memory from reclaiming without using mlock(). A kind of soft
mlock().
I think this patch will be helpful when it's necessary to
further.
Signed-off-by: Greg Thelen gthe...@google.com
---
mm/shmem.c | 10 --
1 file changed, 8 insertions(+), 2 deletions(-)
diff --git a/mm/shmem.c b/mm/shmem.c
index 5dd56f6..efd0b3a 100644
--- a/mm/shmem.c
+++ b/mm/shmem.c
@@ -2487,6 +2487,7 @@ static int shmem_remount_fs(struct
tmpfs -o mpol=interleave,mpol=interleave,size=100M nodev /mnt
umount /mnt
done
This patch fixes all of the above. I could have broken the patch into
three pieces but it seemed easier to review as one.
Signed-off-by: Greg Thelen gthe...@google.com
---
mm/shmem.c | 12 +---
1
On Fri, Jul 27 2012, Sha Zhengju wrote:
From: Sha Zhengju handai@taobao.com
This patch adds memcg routines to count dirty pages, which allows the memory
controller to maintain an accurate view of the amount of its dirty memory
and can provide some info for users while a group's direct
On Fri, Jul 27 2012, Sha Zhengju wrote:
From: Sha Zhengju handai@taobao.com
Similar to dirty page, we add per cgroup writeback pages accounting. The lock
rule still is:
mem_cgroup_begin_update_page_stat()
modify page WRITEBACK stat
mem_cgroup_update_page_stat()
On Mon, Aug 13 2012, Glauber Costa wrote:
Here's the dmesg splat.
Do you always get this report in the same way?
I managed to get a softirq inconsistency like yours, but the complaint
goes for a different lock.
Yes, I repeatedly get the same dmesg splat below.
Once I your 'execute the
On Mon, Aug 13 2012, Glauber Costa wrote:
+ WARN_ON(mem_cgroup_is_root(memcg));
+ size = (1 << order) << PAGE_SHIFT;
+ memcg_uncharge_kmem(memcg, size);
+ mem_cgroup_put(memcg);
Why do we need ref-counting here ? kmem res_counter cannot work as
reference ?
This is of course the pair of the
On Wed, Aug 15 2012, Christoph Lameter wrote:
On Wed, 15 Aug 2012, Michal Hocko wrote:
That is not what the kernel does, in general. We assume that if he wants
that memory and we can serve it, we should. Also, not all kernel memory
is unreclaimable. We can shrink the slabs, for instance.
On Wed, Aug 15 2012, Glauber Costa wrote:
On 08/14/2012 10:58 PM, Greg Thelen wrote:
On Mon, Aug 13 2012, Glauber Costa wrote:
+WARN_ON(mem_cgroup_is_root(memcg));
+size = (1 << order) << PAGE_SHIFT;
+memcg_uncharge_kmem(memcg, size);
+mem_cgroup_put(memcg
On Wed, Aug 15 2012, Glauber Costa wrote:
On 08/15/2012 08:38 PM, Greg Thelen wrote:
On Wed, Aug 15 2012, Glauber Costa wrote:
On 08/14/2012 10:58 PM, Greg Thelen wrote:
On Mon, Aug 13 2012, Glauber Costa wrote:
+ WARN_ON(mem_cgroup_is_root(memcg));
+ size = (1 << order
On Wed, Aug 15 2012, Glauber Costa wrote:
On 08/15/2012 09:12 PM, Greg Thelen wrote:
On Wed, Aug 15 2012, Glauber Costa wrote:
On 08/15/2012 08:38 PM, Greg Thelen wrote:
On Wed, Aug 15 2012, Glauber Costa wrote:
On 08/14/2012 10:58 PM, Greg Thelen wrote:
On Mon, Aug 13 2012, Glauber
On Thu, Aug 09 2012, Glauber Costa wrote:
When a process tries to allocate a page with the __GFP_KMEMCG flag, the
page allocator will call the corresponding memcg functions to validate
the allocation. Tasks in the root memcg can always proceed.
To avoid adding markers to the page - and a
On Thu, Aug 09 2012, Glauber Costa wrote:
When a process tries to allocate a page with the __GFP_KMEMCG flag, the
page allocator will call the corresponding memcg functions to validate
the allocation. Tasks in the root memcg can always proceed.
To avoid adding markers to the page - and a
On Thu, Aug 09 2012, Glauber Costa wrote:
This patch introduces infrastructure for tracking kernel memory pages to
a given memcg. This will happen whenever the caller includes the
__GFP_KMEMCG flag and the task belongs to a memcg other than the root.
In memcontrol.h those functions are
On Thu, Aug 09 2012, Glauber Costa wrote:
This patch introduces infrastructure for tracking kernel memory pages to
a given memcg. This will happen whenever the caller includes the
__GFP_KMEMCG flag and the task belongs to a memcg other than the root.
In memcontrol.h those functions are
On Tue, Aug 21 2012, Michal Hocko wrote:
On Tue 21-08-12 13:22:09, Glauber Costa wrote:
On 08/21/2012 11:54 AM, Michal Hocko wrote:
[...]
But maybe you have a good use case for that?
Honestly, I don't. For my particular use case, this would be always on,
and end of story. I was
On Wed, Aug 22 2012, Glauber Costa wrote:
I am fine with either, I just need a clear sign from you guys so I don't
keep deimplementing and reimplementing this forever.
I would be for make it simple now and go with additional features later
when there is a demand for them. Maybe we will have
On Wed, Aug 22 2012, Glauber Costa wrote:
On 08/22/2012 01:50 AM, Greg Thelen wrote:
On Thu, Aug 09 2012, Glauber Costa wrote:
This patch introduces infrastructure for tracking kernel memory pages to
a given memcg. This will happen whenever the caller includes the
__GFP_KMEMCG flag
On Thu, Aug 23 2012, Glauber Costa wrote:
On 08/23/2012 03:23 AM, Greg Thelen wrote:
On Wed, Aug 22 2012, Glauber Costa wrote:
I am fine with either, I just need a clear sign from you guys so I don't
keep deimplementing and reimplementing this forever.
I would be for make it simple now
modify the enum without updating the dependent string
table.
Otherwise, looks good.
Reviewed-by: Greg Thelen gthe...@google.com
On Thu, Jun 28 2012, Sha Zhengju wrote:
From: Sha Zhengju handai@taobao.com
This patch adds memcg routines to count dirty pages, which allows the memory
controller to maintain an accurate view of the amount of its dirty memory
and can provide some info for users while a group's direct
On Thu, Jun 28 2012, Sha Zhengju wrote:
From: Sha Zhengju handai@taobao.com
Similar to dirty page, we add per cgroup writeback pages accounting. The lock
rule still is:
mem_cgroup_begin_update_page_stat()
modify page WRITEBACK stat
mem_cgroup_update_page_stat()
insertion.
Signed-off-by: Greg Thelen gthe...@google.com
---
kernel/cgroup.c |2 +-
1 files changed, 1 insertions(+), 1 deletions(-)
diff --git a/kernel/cgroup.c b/kernel/cgroup.c
index 01d5342..ece60d4 100644
--- a/kernel/cgroup.c
+++ b/kernel/cgroup.c
@@ -1394,6 +1394,7 @@ static void
from the wait queue.
Signed-off-by: Greg Thelen gthe...@google.com
Signed-off-by: Aaron Durbin adur...@google.com
---
kernel/cgroup.c | 11 ---
1 files changed, 8 insertions(+), 3 deletions(-)
diff --git a/kernel/cgroup.c b/kernel/cgroup.c
index ece60d4..c79a969 100644
--- a/kernel
On Wed, Nov 28 2012, Tejun Heo wrote:
Hello, Greg.
On Wed, Nov 28, 2012 at 12:15:42PM -0800, Greg Thelen wrote:
@@ -4276,6 +4276,7 @@ static int cgroup_destroy_locked(struct cgroup *cgrp)
DEFINE_WAIT(wait);
struct cgroup_event *event, *tmp;
struct cgroup_subsys *ss
Use list_del_init() rather than list_del() to remove events from
cgrp->event_list. No functional change. This is just defensive
coding.
Signed-off-by: Greg Thelen gthe...@google.com
---
kernel/cgroup.c |4 ++--
1 files changed, 2 insertions(+), 2 deletions(-)
diff --git a/kernel/cgroup.c b
from the wait queue.
Signed-off-by: Greg Thelen gthe...@google.com
Signed-off-by: Aaron Durbin adur...@google.com
---
kernel/cgroup.c | 11 ---
1 files changed, 8 insertions(+), 3 deletions(-)
diff --git a/kernel/cgroup.c b/kernel/cgroup.c
index ece60d4..a0d75bb 100644
--- a/kernel
Move the cgroup_event_listener.c tool from Documentation into the new
tools/cgroup directory.
This change involves wiring cgroup_event_listener.c into the tools/
make system so that it can be built with:
$ make tools/cgroup
Signed-off-by: Greg Thelen gthe...@google.com
---
Documentation
.
Compiler warning found this:
$ gcc -Wall -O2 cgroup_event_listener.c
cgroup_event_listener.c: In function ‘main’:
cgroup_event_listener.c:109:2: warning: ‘ret’ may be used uninitialized in
this function [-Wuninitialized]
Signed-off-by: Greg Thelen gthe...@google.com
---
tools/cgroup
Since 628f423553 ("memcg: limit change shrink usage") both
res_counter_write() and write_strategy_fn have been unused. This
patch deletes them both.
Signed-off-by: Greg Thelen gthe...@google.com
---
include/linux/res_counter.h |5 -
kernel/res_counter.c| 22
We ran some netperf comparisons measuring the overhead of enabling
CONFIG_MEMCG_KMEM with a kmem limit. Short answer: no regression seen.
This is a multiple machine (client,server) netperf test. Both client
and server machines were running the same kernel with the same
configuration.
A
On Tue, Dec 25 2012, Sha Zhengju wrote:
From: Sha Zhengju handai@taobao.com
Similar to dirty page, we add per cgroup writeback pages accounting. The lock
rule still is:
mem_cgroup_begin_update_page_stat()
modify page WRITEBACK stat
mem_cgroup_update_page_stat()
):
clear_page_dirty_for_io
cancel_dirty_page
To prevent AB/BA deadlock mentioned by Greg Thelen in previous version
(https://lkml.org/lkml/2012/7/30/227), we adjust the lock order:
->private_lock --> mapping->tree_lock --> memcg->move_lock.
So we need to make mapping
On Mon, Jan 07 2013, Tejun Heo wrote:
On Fri, Jan 04, 2013 at 01:05:18PM -0800, Greg Thelen wrote:
If the absolute-path-to-control-file command line parameter cannot
be opened, then cgroup_event_listener prints an error message and
tries to return an error. However, due to an uninitialized
On Mon, Nov 04 2013, Andrew Morton wrote:
On Sun, 27 Oct 2013 10:30:15 -0700 Greg Thelen gthe...@google.com wrote:
Tests various percpu operations.
Could you please take a look at the 32-bit build (this is i386):
lib/percpu_test.c: In function 'percpu_test_init':
lib/percpu_test.c:61
11,736,855 b31717 vmlinux.after
Signed-off-by: Greg Thelen gthe...@google.com
Signed-off-by: Ying Han ying...@google.com
---
Changelog since v3:
- Use ARRAY_SIZE(stats) rather than array terminator.
- rebased to latest linus/master (d8efd82) to incorporate 182446d08 cgroup:
pass around
=908 N0=552 N1=317 N2=39 N3=0
hierarchical_file=850 N0=549 N1=301 N2=0 N3=0
hierarchical_anon=58 N0=3 N1=16 N2=39 N3=0
hierarchical_unevictable=0 N0=0 N1=0 N2=0 N3=0
Signed-off-by: Ying Han ying...@google.com
Signed-off-by: Greg Thelen gthe...@google.com
---
Changelog since v3:
- push 'iter' local
for shmctl)
Signed-off-by: Greg Thelen gthe...@google.com
Cc: sta...@vger.kernel.org # 3.10.17+ 3.11.6+
---
ipc/shm.c | 28 +++-
1 file changed, 23 insertions(+), 5 deletions(-)
diff --git a/ipc/shm.c b/ipc/shm.c
index d69739610fd4..0bdf21c6814e 100644
--- a/ipc/shm.c
+++ b/ipc
On Wed, May 08 2013, Seth Jennings wrote:
debugfs currently lack the ability to create attributes
that set/get atomic_t values.
This patch adds support for this through a new
debugfs_create_atomic_t() function.
Signed-off-by: Seth Jennings sjenn...@linux.vnet.ibm.com
Acked-by: Greg
---
From c1f43ef0f4cc42fb2ecaeaca71bd247365e3521e Mon Sep 17 00:00:00 2001
From: Greg Thelen gthe...@google.com
Date: Fri, 25 Oct 2013 21:59:57 -0700
Subject: [PATCH] memcg: remove incorrect underflow check
When a memcg is deleted mem_cgroup_reparent_charges() moves charged
memory to the parent memcg
);
Signed-off-by: Greg Thelen gthe...@google.com
---
arch/x86/include/asm/percpu.h | 3 ++-
include/linux/percpu.h| 8
lib/percpu_test.c | 2 +-
3 files changed, 7 insertions(+), 6 deletions(-)
diff --git a/arch/x86/include/asm/percpu.h b/arch/x86/include/asm/percpu.h
Tests various percpu operations.
Enable with CONFIG_PERCPU_TEST=m.
Signed-off-by: Greg Thelen gthe...@google.com
---
lib/Kconfig.debug | 9
lib/Makefile | 2 +
lib/percpu_test.c | 138 ++
3 files changed, 149 insertions
)
admitting that __this_cpu_add/sub() doesn't work with unsigned adjustments. But
I felt like fixing the core services to prevent this in the future.
Greg Thelen (3):
percpu counter: test module
percpu counter: cast this_cpu_sub() adjustment
memcg: use __this_cpu_sub to decrement stats
arch
than adding its
negation. This only works with the "percpu counter: cast
this_cpu_sub() adjustment" patch which fixes this_cpu_sub().
Signed-off-by: Greg Thelen gthe...@google.com
---
mm/memcontrol.c | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/mm/memcontrol.c b/mm
On Sun, Oct 27 2013, Tejun Heo wrote:
On Sun, Oct 27, 2013 at 05:04:29AM -0700, Andrew Morton wrote:
On Sun, 27 Oct 2013 07:22:55 -0400 Tejun Heo t...@kernel.org wrote:
We probably want to cc stable for this and the next one. How should
these be routed? I can take these through percpu
On Sun, Oct 27 2013, Greg Thelen wrote:
this_cpu_sub() is implemented as negation and addition.
This patch casts the adjustment to the counter type before negation to
sign extend the adjustment. This helps in cases where the counter
type is wider than an unsigned adjustment. An alternative
now
referring to per cpu operations rather than per cpu counters.
- move small test code update from patch 2 to patch 1 (where the test is
introduced).
Greg Thelen (3):
percpu: add test module for various percpu operations
percpu: fix this_cpu_sub() subtrahend casting for unsigneds
memcg
);
Signed-off-by: Greg Thelen gthe...@google.com
Acked-by: Tejun Heo t...@kernel.org
---
arch/x86/include/asm/percpu.h | 3 ++-
include/linux/percpu.h| 8
2 files changed, 6 insertions(+), 5 deletions(-)
diff --git a/arch/x86/include/asm/percpu.h b/arch/x86/include/asm/percpu.h
index
Tests various percpu operations.
Enable with CONFIG_PERCPU_TEST=m.
Signed-off-by: Greg Thelen gthe...@google.com
Acked-by: Tejun Heo t...@kernel.org
---
lib/Kconfig.debug | 9
lib/Makefile | 2 +
lib/percpu_test.c | 138 ++
3
than adding its
negation. This only works once "percpu: fix this_cpu_sub() subtrahend
casting for unsigneds" is applied to fix this_cpu_sub().
Signed-off-by: Greg Thelen gthe...@google.com
Acked-by: Tejun Heo t...@kernel.org
---
mm/memcontrol.c | 2 +-
1 file changed, 1 insertion(+), 1 deletion
11,627,372 b16b6c vmlinux.after
Signed-off-by: Greg Thelen gthe...@google.com
Signed-off-by: Ying Han ying...@google.com
---
Changelog since v2:
- rebased to v3.11
- updated commit description
mm/memcontrol.c | 57 +++--
1 file changed, 23 insertions
hierarchical_file=14 N0=0 N1=0 N2=14 N3=0
hierarchical_anon=59 N0=0 N1=41 N2=18 N3=0
hierarchical_unevictable=0 N0=0 N1=0 N2=0 N3=0
Signed-off-by: Ying Han ying...@google.com
Signed-off-by: Greg Thelen gthe...@google.com
---
Changelog since v2:
- reworded Documentation/cgroup/memory.txt
- updated
threshold notifications in v2.6.34-rc1-116-g2e72b6347c94 ("memcg:
implement memory thresholds")
Signed-off-by: Greg Thelen gthe...@google.com
---
mm/memcontrol.c | 8 +++-
1 file changed, 7 insertions(+), 1 deletion(-)
diff --git a/mm/memcontrol.c b/mm/memcontrol.c
index 0878ff7..aa44621 100644
On Fri, Jun 06 2014, Michal Hocko mho...@suse.cz wrote:
Some users (e.g. Google) would like to have stronger semantic than low
limit offers currently. The fallback mode is not desirable and they
prefer hitting OOM killer rather than ignoring low limit for protected
groups. There are other
to be as compacted as possible at the end of the zone.
Reported-by: Greg Thelen gthe...@google.com
What did Greg actually report? IOW, what if any observable problem is
being fixed here?
I detected the problem at runtime seeing that ext4 metadata pages (esp
the ones read by sbi-s_group_desc
On Tue, Jun 10 2014, Johannes Weiner han...@cmpxchg.org wrote:
On Mon, Jun 09, 2014 at 03:52:51PM -0700, Greg Thelen wrote:
On Fri, Jun 06 2014, Michal Hocko mho...@suse.cz wrote:
Some users (e.g. Google) would like to have stronger semantic than low
limit offers currently
On Tue, May 13 2014, Michal Hocko mho...@suse.cz wrote:
force_empty has been introduced primarily to drop memory before it gets
reparented on the group removal. This alone doesn't sound fully
justified because reparented pages which are not in use can be reclaimed
also later when there is a
On Wed, May 28 2014, Johannes Weiner han...@cmpxchg.org wrote:
On Wed, May 28, 2014 at 04:21:44PM +0200, Michal Hocko wrote:
On Wed 28-05-14 09:49:05, Johannes Weiner wrote:
On Wed, May 28, 2014 at 02:10:23PM +0200, Michal Hocko wrote:
Hi Andrew, Johannes,
On Mon 28-04-14 14:26:41,
On Mon, Feb 03 2014, Michal Hocko wrote:
On Thu 30-01-14 16:28:27, Greg Thelen wrote:
On Thu, Jan 30 2014, Michal Hocko wrote:
On Wed 29-01-14 11:08:46, Greg Thelen wrote:
[...]
The series looks useful. We (Google) have been using something similar.
In practice such a low_limit
-introducing the old test within
the racy critical sections.
This patch introduces ipc_valid_object() to consolidate the way we cope with
IPC_RMID races by using the same abstraction across the API implementation.
Signed-off-by: Rafael Aquini aqu...@redhat.com
Acked-by: Greg Thelen gthe
On Wed, Dec 11 2013, Michal Hocko wrote:
Hi,
previous discussions have shown that soft limits cannot be reformed
(http://lwn.net/Articles/555249/). This series introduces an alternative
approach to protecting memory allocated to processes executing within
a memory cgroup controller. It is
On Thu, Jan 30 2014, Michal Hocko wrote:
On Wed 29-01-14 11:08:46, Greg Thelen wrote:
[...]
The series looks useful. We (Google) have been using something similar.
In practice such a low_limit (or memory guarantee), doesn't nest very
well.
Example:
- parent_memcg: limit 500, low_limit
On Wed, Mar 26 2014, Vladimir Davydov vdavy...@parallels.com wrote:
We don't track any random page allocation, so we shouldn't track kmalloc
that falls back to the page allocator.
This seems like a change which will lead to confusing (and arguably
improper) kernel behavior. I prefer the
On Thu, Mar 27, 2014 at 12:37 AM, Vladimir Davydov
vdavy...@parallels.com wrote:
Hi Greg,
On 03/27/2014 08:31 AM, Greg Thelen wrote:
On Wed, Mar 26 2014, Vladimir Davydov vdavy...@parallels.com wrote:
We don't track any random page allocation, so we shouldn't track kmalloc
that falls back
On Mon, Apr 28 2014, Roman Gushchin kl...@yandex-team.ru wrote:
28.04.2014, 16:27, Michal Hocko mho...@suse.cz:
The series is based on top of the current mmotm tree. Once the series
gets accepted I will post a patch which will mark the soft limit as
deprecated with a note that it will be
On Tue, Apr 01 2014, Vladimir Davydov vdavy...@parallels.com wrote:
Currently to allocate a page that should be charged to kmemcg (e.g.
threadinfo), we pass __GFP_KMEMCG flag to the page allocator. The page
allocated is then to be freed by free_memcg_kmem_pages. Apart from
looking
On Tue, Apr 01 2014, Davidlohr Bueso davidl...@hp.com wrote:
On Tue, 2014-04-01 at 19:56 -0400, KOSAKI Motohiro wrote:
Ah-hah, that's interesting info.
Let's make the default 64GB?
64GB is infinity at that time, but it is no longer near infinity today. I
like
very large or total
of misaccounting an allocation
going from one memcg's cache to another memcg, because now we always
charge slabs against the memcg the cache belongs to. That's why this
patch removes the big comment to memcg_kmem_get_cache.
Signed-off-by: Vladimir Davydov vdavy...@parallels.com
Acked-by: Greg Thelen gthe
the default value, users can potentially DoS the
system, or at least cause excessive swapping if not manually set, but
then again the same goes for anon mem... so do we care?
(2014/04/02 10:08), Greg Thelen wrote:
At least when there's an egregious anon leak the oom killer has the
power
One comment nit below, otherwise looks good to me.
Acked-by: Greg Thelen gthe...@google.com
Cc: Johannes Weiner han...@cmpxchg.org
Cc: Michal Hocko mho...@suse.cz
Cc: Glauber Costa glom...@gmail.com
Cc: Christoph Lameter c...@linux-foundation.org
Cc: Pekka Enberg penb...@kernel.org
On Wed, Jul 9, 2014 at 9:36 AM, Vladimir Davydov vdavy...@parallels.com wrote:
Hi Tim,
On Wed, Jul 09, 2014 at 08:08:07AM -0700, Tim Hockin wrote:
How is this different from RLIMIT_AS? You specifically mentioned it
earlier but you don't explain how this is different.
The main difference is
6b208e3f6e35 ("mm: memcg: remove unused node/section info from
pc->flags") deleted the lookup_cgroup_page() function but left a
prototype for it.
Kill the vestigial prototype.
Signed-off-by: Greg Thelen gthe...@google.com
---
include/linux/page_cgroup.h | 1 -
1 file changed, 1 deletion(-)
diff
aggressive shrinking of dm bufio objects.
If the uninitialized dm_bufio_client.shrinker.flags contains
SHRINKER_NUMA_AWARE then shrink_slab() would call the dm shrinker for
each numa node rather than just once. This has been broken since 3.12.
Signed-off-by: Greg Thelen gthe...@google.com
On Thu, Aug 07 2014, Johannes Weiner wrote:
On Thu, Aug 07, 2014 at 03:08:22PM +0200, Michal Hocko wrote:
On Mon 04-08-14 17:14:54, Johannes Weiner wrote:
Instead of passing the request size to direct reclaim, memcg just
manually loops around reclaiming SWAP_CLUSTER_MAX pages until the
On Fri, Sep 19 2014, Johannes Weiner wrote:
In a memcg with even just moderate cache pressure, success rates for
transparent huge page allocations drop to zero, wasting a lot of
effort that the allocator puts into assembling these pages.
The reason for this is that the memcg reclaim code
On Tue, Sep 16 2014, Vladimir Davydov wrote:
Hi Suleiman,
On Mon, Sep 15, 2014 at 12:13:33PM -0700, Suleiman Souhlal wrote:
On Mon, Sep 15, 2014 at 3:44 AM, Vladimir Davydov
vdavy...@parallels.com wrote:
Hi,
I'd like to discuss downsides of the kmem accounting part of the memory
On Tue, Sep 23 2014, Johannes Weiner wrote:
On Mon, Sep 22, 2014 at 10:52:50PM -0700, Greg Thelen wrote:
On Fri, Sep 19 2014, Johannes Weiner wrote:
In a memcg with even just moderate cache pressure, success rates for
transparent huge page allocations drop to zero, wasting a lot
On Fri, Oct 31 2014, Junjie Mao wrote:
When choosing a random address, the current implementation does not take into
account the reserved space for the .bss and .brk sections. Thus the relocated
kernel
may overlap other components in memory. Here is an example of the overlap
from a
x86_64
On Mon, Nov 17 2014, Greg Thelen wrote:
[...]
Given that bss and brk are nobits (i.e. only ALLOC) sections, does
file_offset make sense as a load address. This fails with gold:
$ git checkout v3.18-rc5
$ make # with gold
[...]
.bss and .brk lack common file offset
.bss and .brk lack
On Wed, Feb 04 2015, Tejun Heo wrote:
Hello,
On Tue, Feb 03, 2015 at 03:30:31PM -0800, Greg Thelen wrote:
If a machine has several top level memcg trying to get some form of
isolation (using low, min, soft limit) then a shared libc will be
moved to the root memcg where it's not protected
On Fri, Feb 6, 2015 at 6:17 AM, Tejun Heo t...@kernel.org wrote:
Hello, Greg.
On Thu, Feb 05, 2015 at 04:03:34PM -0800, Greg Thelen wrote:
So this is a system which charges all cgroups using a shared inode
(recharge on read) for all resident pages of that shared inode. There's
only
On Thu, Feb 05 2015, Tejun Heo wrote:
Hello, Greg.
On Wed, Feb 04, 2015 at 03:51:01PM -0800, Greg Thelen wrote:
I think the linux-next low (and the TBD min) limits also have the
problem for more than just the root memcg. I'm thinking of a 2M file
shared between C and D below. The file
On Thu, Feb 05 2015, Tejun Heo wrote:
Hey,
On Thu, Feb 05, 2015 at 02:05:19PM -0800, Greg Thelen wrote:
A
+-B(usage=2M lim=3M min=2M hosted_usage=2M)
+-C (usage=0 lim=2M min=1M shared_usage=2M)
+-D (usage=0 lim=2M min=1M shared_usage=2M)
\-E (usage=0
On Mon, Feb 2, 2015 at 11:46 AM, Tejun Heo t...@kernel.org wrote:
Hey,
On Mon, Feb 02, 2015 at 10:26:44PM +0300, Konstantin Khlebnikov wrote:
Keeping shared inodes in common ancestor is reasonable.
We could schedule asynchronous moving when somebody opens or mmaps
inode from outside of its
On Tue, Feb 10, 2015 at 6:19 PM, Tejun Heo t...@kernel.org wrote:
Hello, again.
On Sat, Feb 07, 2015 at 09:38:39AM -0500, Tejun Heo wrote:
If we can argue that memcg and blkcg having different views is
meaningful and characterize and justify the behaviors stemming from
the deviation, sure,
On Wed, Feb 11, 2015 at 12:33 PM, Tejun Heo t...@kernel.org wrote:
[...]
page count to throttle based on blkcg's bandwidth. Note: memcg
doesn't yet have dirty page counts, but several of us have made
attempts at adding the counters. And it shouldn't be hard to get them
merged.
Can you
cgroup-name)
Signed-off-by: Greg Thelen gthe...@google.com
---
mm/memcontrol.c | 4 ++--
1 file changed, 2 insertions(+), 2 deletions(-)
diff --git a/mm/memcontrol.c b/mm/memcontrol.c
index 851924fa5170..683b4782019b 100644
--- a/mm/memcontrol.c
+++ b/mm/memcontrol.c
@@ -1477,9 +1477,9 @@ void
Use BUILD_BUG_ON() to compile assert that memcg string tables are in
sync with corresponding enums. There aren't currently any issues with
these tables. This is just defensive.
Signed-off-by: Greg Thelen gthe...@google.com
---
mm/memcontrol.c | 4
1 file changed, 4 insertions(+)
diff