sary, because these functions aren't
hot paths. Neither do I think it makes the code look better. Anyway,
it's rather a matter of personal preference, and the patch looks correct
to me, so
Reviewed-by: Vladimir Davydov
>
> This also eliminates the need for dummy functions because the ca
Johannes Weiner
Reviewed-by: Vladimir Davydov
--
To unsubscribe from this list: send the line "unsubscribe linux-kernel" in
the body of a message to majordomo@vger.kernel.org
More majordomo info at http://vger.kernel.org/majordomo-info.html
Please read the FAQ at http://www.tux.org/lkml/
;).
>
> To avoid adding yet more pointless memory+swap accounting with the
> socket memory support in unified hierarchy, disable the counter
> altogether when in unified hierarchy mode.
>
> Signed-off-by: Johannes Weiner
Reviewed-by: Vladimir Davydov
heir limit, and the child should enter socket pressure.
>
> Signed-off-by: Johannes Weiner
Reviewed-by: Vladimir Davydov
For the record: it was broken by commit 3e32cb2e0a12 ("mm: memcontrol:
lockless page counters").
On Thu, Nov 12, 2015 at 06:41:21PM -0500, Johannes Weiner wrote:
...
> diff --git a/mm/vmscan.c b/mm/vmscan.c
> index a4507ec..e4f5b3c 100644
> --- a/mm/vmscan.c
> +++ b/mm/vmscan.c
> @@ -411,6 +411,10 @@ static unsigned long shrink_slab(gfp_t gfp_mask, int nid,
> struct shrinker *shrinker;
>
On Thu, Nov 12, 2015 at 06:41:20PM -0500, Johannes Weiner wrote:
> A later patch will need this symbol in files other than memcontrol.c,
> so export it now and replace mem_cgroup_root_css at the same time.
>
> Signed-off-by: Johannes Weiner
> Acked-by: Michal Hocko
Review
On Thu, Nov 12, 2015 at 05:17:41PM +0100, Michal Hocko wrote:
> On Tue 10-11-15 21:34:05, Vladimir Davydov wrote:
> > Currently, if we want to account all objects of a particular kmem cache,
> > we have to pass __GFP_ACCOUNT to each kmem_cache_alloc call, which is
> > inc
On Wed, Nov 11, 2015 at 10:54:50AM -0500, Tejun Heo wrote:
> Hello,
>
> On Tue, Nov 10, 2015 at 09:54:01PM +0300, Vladimir Davydov wrote:
> > > Am I correct in thinking that we should eventually be able to remove
> > > __GFP_ACCOUNT and that only caches explicit
On Tue, Nov 10, 2015 at 01:38:08PM -0500, Tejun Heo wrote:
> On Tue, Nov 10, 2015 at 09:34:05PM +0300, Vladimir Davydov wrote:
> > Currently, if we want to account all objects of a particular kmem cache,
> > we have to pass __GFP_ACCOUNT to each kmem_cache_alloc call, which is
" approach (simply because it did not account everything in
fact).
Signed-off-by: Vladimir Davydov
---
arch/powerpc/platforms/cell/spufs/inode.c | 2 +-
drivers/staging/lustre/lustre/llite/super25.c | 3 ++-
fs/9p/v9fs.c | 2 +-
fs/adfs/super.c
accounting is not used (only compiled in).
Suggested-by: Tejun Heo
Signed-off-by: Vladimir Davydov
---
include/linux/memcontrol.h | 15 +++
include/linux/slab.h | 5 +
mm/memcontrol.c| 8 +++-
mm/slab.h | 5 +++--
mm/slab_common.c
introduced later in the series.
Signed-off-by: Vladimir Davydov
Conflicts:
include/linux/memcontrol.h
---
include/linux/gfp.h| 2 --
include/linux/memcontrol.h | 2 --
mm/kmemleak.c | 3 +--
3 files changed, 1 insertion(+), 6 deletions(-)
diff --git a/include/linux
This patch makes vmalloc family functions allocate vmalloc area pages
with alloc_kmem_pages so that if __GFP_ACCOUNT is set they will be
accounted to memcg. This is needed, at least, to account alloc_fdmem
allocations.
Signed-off-by: Vladimir Davydov
---
mm/vmalloc.c | 6 +++---
1 file changed
following patches will mark several kmem allocations that are known to
be easily triggered from userspace and therefore should be accounted to
memcg.
Signed-off-by: Vladimir Davydov
---
include/linux/gfp.h| 4
include/linux/memcontrol.h | 2 ++
mm/page_alloc.c| 3 ++-
3 files
6) and it still misses many object types. However, accounting only
those objects should be a satisfactory approximation of the behavior we
used to have for most sane workloads.
Changes in v2:
- add and use SLAB_ACCOUNT flag (Tejun)
v1: http://marc.info/?l=linux-mm&m=144692684713032&w=2
Th
introduced later in the series.
Signed-off-by: Vladimir Davydov
---
fs/kernfs/dir.c | 9 +
1 file changed, 1 insertion(+), 8 deletions(-)
diff --git a/fs/kernfs/dir.c b/fs/kernfs/dir.c
index 91e004518237..0239a0a76ed5 100644
--- a/fs/kernfs/dir.c
+++ b/fs/kernfs/dir.c
@@ -541,14 +541,7
On Tue, Nov 10, 2015 at 05:50:04PM +0900, Naoya Horiguchi wrote:
> PageIdle is exported in include/uapi/linux/kernel-page-flags.h, so let's
> make page-types.c tool handle it.
>
> Signed-off-by: Naoya Horiguchi
Reviewed-by: Vladimir Davydov
On Mon, Nov 09, 2015 at 03:39:55PM +0100, Michal Hocko wrote:
> On Sat 07-11-15 23:07:09, Vladimir Davydov wrote:
> > This patch marks those kmem allocations that are known to be easily
> > triggered from userspace as __GFP_ACCOUNT, which makes them accounted to
> > memcg. Fo
On Mon, Nov 09, 2015 at 03:30:53PM -0500, Tejun Heo wrote:
...
> Hmm can't we simply merge among !SLAB_ACCOUNT and SLAB_ACCOUNT
> kmem_caches within themselves? I don't think we'd be losing anything
> by restricting merge at that level. For anything to be tagged
> SLAB_ACCOUNT, it has to have
On Mon, Nov 09, 2015 at 02:32:53PM -0500, Tejun Heo wrote:
> On Mon, Nov 09, 2015 at 10:27:47PM +0300, Vladimir Davydov wrote:
> > Of course, we could rework slab merging so that kmem_cache_create
> > returned a new dummy cache even if it was actually merged. Such a cache
>
On Mon, Nov 09, 2015 at 01:54:01PM -0500, Tejun Heo wrote:
> On Mon, Nov 09, 2015 at 09:28:40PM +0300, Vladimir Davydov wrote:
> > > I am _all_ for this semantic I am just not sure what to do with the
> > > legacy kmem controller. Can we change its semantic? If we cannot do
legacy API can cope
> with that.
>
> Anyway if we go this way then I think the kmem accounting would be safe
> to be enabled by default with the cgroup2.
>
> > Thanks,
> >
> > Vladimir Davydov (5):
> > Revert "kernfs: do not account ino_ida allocatio
within bounds. Malevolent users will be able to
breach the limit, but this was possible even with the former "account
everything" approach (simply because it did not account everything in
fact).
Signed-off-by: Vladimir Davydov
---
arch/powerpc/platforms/cell/spufs/inode.c | 2 +-
drivers/stag
accounting only
those objects should be a satisfactory approximation of the behavior we
used to have for most sane workloads.
Thanks,
Vladimir Davydov (5):
Revert "kernfs: do not account ino_ida allocations to memcg"
Revert "gfp: add __GFP_NOACCOUNT"
memcg: only
intended.
Signed-off-by: Vladimir Davydov
---
include/linux/rmap.h | 8
mm/page_idle.c | 62 -
mm/rmap.c| 110 ++-
3 files changed, 81 insertions(+), 99 deletions(-)
diff --git a/include/linux
On Thu, Nov 05, 2015 at 03:55:22PM -0500, Johannes Weiner wrote:
> On Thu, Nov 05, 2015 at 03:40:02PM +0100, Michal Hocko wrote:
...
> > 3) keep only some (safe) cache types enabled by default with the current
> >failing semantic and require an explicit enabling for the complete
> >kmem acc
On Thu, Nov 05, 2015 at 02:58:38PM +0200, Kirill A. Shutemov wrote:
> Okay. Could you prepare the patch?
OK, give me some time.
Thanks,
Vladimir
On Tue, Nov 03, 2015 at 05:26:15PM +0200, Kirill A. Shutemov wrote:
...
> @@ -812,60 +812,104 @@ static int page_referenced_one(struct page *page,
> struct vm_area_struct *vma,
> spinlock_t *ptl;
> int referenced = 0;
> struct page_referenced_arg *pra = arg;
> + pgd_t *pgd;
>
On Thu, Nov 05, 2015 at 02:36:06PM +0200, Kirill A. Shutemov wrote:
> On Thu, Nov 05, 2015 at 03:07:26PM +0300, Vladimir Davydov wrote:
> > @@ -849,30 +836,23 @@ static int page_referenced_one(struct page *page,
> > struct vm_area_struct *vma,
> > if (p
On Thu, Nov 05, 2015 at 11:24:59AM +0200, Kirill A. Shutemov wrote:
> On Thu, Nov 05, 2015 at 12:10:13PM +0300, Vladimir Davydov wrote:
> > On Tue, Nov 03, 2015 at 05:26:15PM +0200, Kirill A. Shutemov wrote:
> > ...
> > > @@ -56,23 +56,69 @@ static int page_idle_clear_
On Tue, Nov 03, 2015 at 05:26:15PM +0200, Kirill A. Shutemov wrote:
...
> @@ -56,23 +56,69 @@ static int page_idle_clear_pte_refs_one(struct page *page,
> {
> struct mm_struct *mm = vma->vm_mm;
> spinlock_t *ptl;
> + pgd_t *pgd;
> + pud_t *pud;
> pmd_t *pmd;
> pte_t
On Thu, Oct 29, 2015 at 10:52:28AM -0700, Johannes Weiner wrote:
...
> Now, you mentioned that you'd rather see the socket buffers accounted
> at the allocator level, but I looked at the different allocation paths
> and network protocols and I'm not convinced that this makes sense. We
> don't want
On Wed, Oct 28, 2015 at 11:58:10AM -0700, Johannes Weiner wrote:
> On Wed, Oct 28, 2015 at 11:20:03AM +0300, Vladimir Davydov wrote:
> > Then you'd better not touch existing tcp limits at all, because they
> > just work, and the logic behind them is very close to that of glob
On Tue, Oct 27, 2015 at 09:01:08AM -0700, Johannes Weiner wrote:
...
> > > But regardless of tcp window control, we need to account socket memory
> > > in the main memory accounting pool where pressure is shared (to the
> > > best of our abilities) between all accounted memory consumers.
> > >
> >
"mm: memcontrol: lockless page counters")
> CC: stable@vger.kernel.org
> Reported-by: Ben Hutchings
> Signed-off-by: Michal Hocko
Reviewed-by: Vladimir Davydov
On Mon, Oct 26, 2015 at 01:22:16PM -0400, Johannes Weiner wrote:
> On Thu, Oct 22, 2015 at 09:45:10PM +0300, Vladimir Davydov wrote:
> > Hi Johannes,
> >
> > On Thu, Oct 22, 2015 at 12:21:28AM -0400, Johannes Weiner wrote:
> > ...
> > > Patch #5 adds account
On Thu, Oct 22, 2015 at 03:09:43PM -0400, Johannes Weiner wrote:
> On Thu, Oct 22, 2015 at 09:46:12PM +0300, Vladimir Davydov wrote:
> > On Thu, Oct 22, 2015 at 12:21:31AM -0400, Johannes Weiner wrote:
> > > The tcp memory controller has extensive provisions for future memor
On Thu, Oct 22, 2015 at 12:21:36AM -0400, Johannes Weiner wrote:
...
> @@ -185,8 +183,29 @@ static void vmpressure_work_fn(struct work_struct *work)
> vmpr->reclaimed = 0;
> spin_unlock(&vmpr->sr_lock);
>
> + level = vmpressure_calc_level(scanned, reclaimed);
> +
> + if (level
On Thu, Oct 22, 2015 at 12:21:35AM -0400, Johannes Weiner wrote:
...
> @@ -2437,6 +2439,10 @@ static bool shrink_zone(struct zone *zone, struct
> scan_control *sc,
> }
> }
>
> + vmpressure(sc->gfp_mask, memcg,
> +
On Thu, Oct 22, 2015 at 12:21:33AM -0400, Johannes Weiner wrote:
...
> @@ -5500,13 +5524,38 @@ void sock_release_memcg(struct sock *sk)
> */
> bool mem_cgroup_charge_skmem(struct mem_cgroup *memcg, unsigned int nr_pages)
> {
> + unsigned int batch = max(CHARGE_BATCH, nr_pages);
> stru
On Thu, Oct 22, 2015 at 12:21:31AM -0400, Johannes Weiner wrote:
> The tcp memory controller has extensive provisions for future memory
> accounting interfaces that won't materialize after all. Cut the code
> base down to what's actually used, now and in the likely future.
>
> - There won't be any
Hi Johannes,
On Thu, Oct 22, 2015 at 12:21:28AM -0400, Johannes Weiner wrote:
...
> Patch #5 adds accounting and tracking of socket memory to the unified
> hierarchy memory controller, as described above. It uses the existing
> per-cpu charge caches and triggers high limit reclaim asynchronously.
On Tue, Oct 20, 2015 at 09:56:06AM -0400, Johannes Weiner wrote:
> On Tue, Oct 20, 2015 at 03:19:20PM +0300, Vladimir Davydov wrote:
> > On Mon, Oct 19, 2015 at 02:13:35PM -0400, Johannes Weiner wrote:
> > > cb731d6 ("vmscan: per memory cgroup slab shrinkers") sought to
On Mon, Oct 19, 2015 at 02:13:35PM -0400, Johannes Weiner wrote:
> cb731d6 ("vmscan: per memory cgroup slab shrinkers") sought to
> optimize accumulating slab reclaim results in sc->nr_reclaimed only
> once per zone, but the memcg hierarchy walk itself uses
> sc->nr_reclaimed as an exit condition.
code cleanup.
Signed-off-by: Vladimir Davydov
---
include/linux/mm.h | 4 +++-
1 file changed, 3 insertions(+), 1 deletion(-)
diff --git a/include/linux/mm.h b/include/linux/mm.h
index 6adf4167d664..30ef3b535444 100644
--- a/include/linux/mm.h
+++ b/include/linux/mm.h
@@ -1553,8 +1553,10 @@ s
On Fri, Oct 16, 2015 at 05:19:32PM -0700, Johannes Weiner wrote:
...
> I think it'd be better to have an outer function than a magic
> parameter for the memcg lookup. Could we fold this in there?
Yeah, that looks neater. Thanks!
Andrew, could you please fold this one too?
>
> ---
>
> Signed-of
On Fri, Oct 16, 2015 at 03:12:23PM -0700, Hugh Dickins wrote:
...
> Are you expecting to use mem_cgroup_from_kmem() from other places
> in future? Seems possible; but at present it's called from only
Not in the near future. At least, currently I can't think of any other
use for it except list_lru
On Fri, Oct 16, 2015 at 04:17:26PM +0300, Kirill A. Shutemov wrote:
> On Mon, Oct 05, 2015 at 01:21:43AM +0300, Vladimir Davydov wrote:
> > Before the previous patch, __mem_cgroup_from_kmem had to handle two
> > types of kmem - slab pages and pages allocated with alloc_kmem_pages -
On Thu, Oct 08, 2015 at 02:17:35PM -0700, Andrew Morton wrote:
> On Thu, 8 Oct 2015 19:02:40 +0300 Vladimir Davydov
> wrote:
>
> > Currently, we do not clear pointers to per memcg caches in the
> > memcg_params.memcg_caches array when a global cache is destroyed with
>
warning is only printed once if there are objects left in
the cache being destroyed.
Signed-off-by: Vladimir Davydov
---
mm/slab_common.c | 13 +++--
1 file changed, 7 insertions(+), 6 deletions(-)
diff --git a/mm/slab_common.c b/mm/slab_common.c
index ab1f20e303e4..fba78e4a6643 100644
introduce any functional changes.
Signed-off-by: Vladimir Davydov
---
mm/slab_common.c | 37 ++---
1 file changed, 18 insertions(+), 19 deletions(-)
diff --git a/mm/slab_common.c b/mm/slab_common.c
index 113a6fd597db..c8d2ed7f8330 100644
--- a/mm/slab_common.c
+++ b/mm
kmem_cache_destroy is called (due to a memory leak). If this
happens, the entries in the array will point to already freed areas,
which is likely to result in data corruption when the cache is reused
(via slab merging).
Signed-off-by: Vladimir Davydov
---
mm/slab.h| 6
mm/slab_common.c | 93
code instead
of bool to conform to mem_cgroup_try_charge.
Signed-off-by: Vladimir Davydov
---
include/linux/memcontrol.h | 69 +-
mm/memcontrol.c| 39 +++---
mm/page_alloc.c| 18 ++--
3 files changed, 32 inserti
we can fold it into mem_cgroup_from_kmem.
Signed-off-by: Vladimir Davydov
---
include/linux/memcontrol.h | 7 ---
mm/memcontrol.c| 18 --
2 files changed, 4 insertions(+), 21 deletions(-)
diff --git a/include/linux/memcontrol.h b/include/linux/memcontrol.h
index
patch switches slab to charge-after-alloc design. Since this
design is already used for all other memcg charges, it should not make
any difference.
Signed-off-by: Vladimir Davydov
---
include/linux/memcontrol.h | 9 ++
mm/memcontrol.c
On Wed, Sep 30, 2015 at 12:51:18PM -0700, Greg Thelen wrote:
>
> Vladimir Davydov wrote:
...
> > diff --git a/include/linux/page-flags.h b/include/linux/page-flags.h
> > index 416509e26d6d..a190719c2f46 100644
> > --- a/include/linux/page-flags.h
> > +++ b/include/li
On Tue, Sep 29, 2015 at 03:57:11PM -0700, Andrew Morton wrote:
> On Sat, 26 Sep 2015 13:45:54 +0300 Vladimir Davydov
> wrote:
>
> > Pipe buffers can be generated unrestrictedly by an unprivileged
> > userspace process, so they shouldn't go unaccounted.
> >
On Tue, Sep 29, 2015 at 03:43:47PM -0700, Andrew Morton wrote:
> On Sat, 26 Sep 2015 13:45:53 +0300 Vladimir Davydov
> wrote:
>
> > Currently, to charge a page to kmemcg one should use alloc_kmem_pages
> > helper. When the page is not needed anymore it must be freed wit
Hi,
There are at least two object types left that can be allocated by an
unprivileged process and go uncharged to memcg - pipe buffers and page
tables. This patch set tries to make them accounted.
Comments are welcome.
Thanks,
Vladimir Davydov (5):
mm: uncharge kmem pages from generic
Pipe buffers can be generated unrestrictedly by an unprivileged
userspace process, so they shouldn't go unaccounted.
Signed-off-by: Vladimir Davydov
---
fs/pipe.c | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/fs/pipe.c b/fs/pipe.c
index 8865f7963700..6880884b70b0 1
uncharge code to
the generic free path and zaps free_kmem_pages helper. To distinguish
kmem pages from other page types, it makes alloc_kmem_pages initialize
page->_mapcount to a special value and introduces a new PageKmem helper,
which returns true if it sees this value.
Signed-off-by: Vladi
: Vladimir Davydov
---
mm/memcontrol.c | 21 ++---
mm/swap.c | 3 +--
2 files changed, 15 insertions(+), 9 deletions(-)
diff --git a/mm/memcontrol.c b/mm/memcontrol.c
index 6ddaeba34e09..a61fe1604f49 100644
--- a/mm/memcontrol.c
+++ b/mm/memcontrol.c
@@ -5420,15 +5420,18
Works exactly as __get_free_pages except it also tries to charge newly
allocated pages to kmemcg. It will be used by the next patch.
Signed-off-by: Vladimir Davydov
---
include/linux/gfp.h | 1 +
mm/page_alloc.c | 12
2 files changed, 13 insertions(+)
diff --git a/include
xed
the problem in case of oom-killer, this patch attempts to fix it for
memory cgroup on x86 by making pte_alloc_one and friends use
alloc_kmem_pages instead of alloc_pages so as to charge page table pages
to kmemcg.
Signed-off-by: Vladimir Davydov
---
arch/x86/include/asm/pgalloc.h | 5 +++--
On Wed, Sep 23, 2015 at 06:50:22PM +0900, Sergey Senozhatsky wrote:
> On (09/23/15 11:43), Michal Hocko wrote:
> [..]
> > > > the previous name was already null terminated,
> > >
> > > Yeah, but if the old name is shorter than the new one, set_task_comm()
> > > overwrites the terminating null of t
On Wed, Sep 23, 2015 at 06:13:54PM +0900, Sergey Senozhatsky wrote:
> On (09/23/15 11:06), Vladimir Davydov wrote:
> > Hi,
> >
> > On Tue, Sep 22, 2015 at 04:30:13PM -0700, David Rientjes wrote:
> > > The oom killer takes task_lock() in a couple of places solely
Hi,
On Tue, Sep 22, 2015 at 04:30:13PM -0700, David Rientjes wrote:
> The oom killer takes task_lock() in a couple of places solely to protect
> printing the task's comm.
>
> A process's comm, including current's comm, may change due to
> /proc/pid/comm or PR_SET_NAME.
>
> The comm will always b
On Mon, Sep 21, 2015 at 02:10:39PM +1000, Stephen Rothwell wrote:
> After merging the akpm-current tree, today's linux-next build (x86_64
> allmodconfig) failed like this:
>
> mm/vmscan.c: In function 'sane_reclaim':
> mm/vmscan.c:178:2: error: implicit declaration of function 'cgroup_on_dfl'
> [
On Sat, Sep 19, 2015 at 06:26:23PM +0200, Michal Hocko wrote:
> On Fri 18-09-15 18:43:23, Vladimir Davydov wrote:
> [...]
> > Fixes: acc067d59a1f9 ("mm: make optimistic check for swapin readahead")
>
> This sha will not exist after the patch gets merged to the Linus
[] ret_from_fork+0x3f/0x70
[] ? kthread_freezable_should_stop+0x70/0x70
RIP [] __alloc_pages_nodemask+0xc2/0x2c0
RSP
CR2: 00014028
Fixes: acc067d59a1f9 ("mm: make optimistic check for swapin readahead")
Signed-off-by: Vladimir Davydov
---
mm/huge_memory.c | 2 +-
1
98cf2f360c ("memcg: export struct mem_cgroup")
Signed-off-by: Vladimir Davydov
---
mm/vmscan.c | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/mm/vmscan.c b/mm/vmscan.c
index db5339dd4a32..dbc3b3ae48de 100644
--- a/mm/vmscan.c
+++ b/mm/vmscan.c
@@ -175,7 +175,7 @@ stat
> [ Take memcg_aware check outside for_each loop: Vladimir ]
> Signed-off-by: Raghavendra K T
Reviewed-by: Vladimir Davydov
On Mon, Sep 14, 2015 at 06:35:59PM +0530, Raghavendra K T wrote:
> On 09/14/2015 05:34 PM, Vladimir Davydov wrote:
> >On Mon, Sep 14, 2015 at 05:09:31PM +0530, Raghavendra K T wrote:
> >>On 09/14/2015 02:30 PM, Vladimir Davydov wrote:
> >>>On Wed, Sep 09, 2015 at 12:
On Mon, Sep 14, 2015 at 05:09:31PM +0530, Raghavendra K T wrote:
> On 09/14/2015 02:30 PM, Vladimir Davydov wrote:
> >On Wed, Sep 09, 2015 at 12:01:46AM +0530, Raghavendra K T wrote:
> >>The functions used in the patch are in slowpath, which gets called
> >>whenever a
Hi,
On Wed, Sep 09, 2015 at 12:01:46AM +0530, Raghavendra K T wrote:
> The functions used in the patch are in slowpath, which gets called
> whenever alloc_super is called during mounts.
>
> Though this should not make difference for the architectures with
> sequential numa node ids, for the power
Hi Tejun, Michal
On Fri, Sep 04, 2015 at 11:44:48AM -0400, Tejun Heo wrote:
...
> > I admit I may be mistaken, but if I'm right, we may end up with really
> > complex memcg reclaim logic trying to closely mimic behavior of buddy
> > alloc with all its historic peculiarities. That's why I don't wan
On Thu, Sep 03, 2015 at 12:32:43PM -0400, Tejun Heo wrote:
> On Wed, Sep 02, 2015 at 12:30:39PM +0300, Vladimir Davydov wrote:
> ...
> > To sum it up. Basically, there are two ways of handling kmemcg charges:
> >
> > 1. Make the memcg try_charge mimic alloc_pages be
On Wed, Sep 02, 2015 at 01:16:47PM -0500, Christoph Lameter wrote:
> On Wed, 2 Sep 2015, Vladimir Davydov wrote:
>
> > Slab is a kind of abnormal alloc_pages user. By calling alloc_pages_node
> > with __GFP_THISNODE and w/o __GFP_WAIT before falling back to
> > alloc
ng Johannes to Cc (I noticed that I accidentally left him
out), because this discussion seems to be fundamental and may affect
our further steps dramatically.
]
On Tue, Sep 01, 2015 at 08:38:50PM +0200, Michal Hocko wrote:
> On Tue 01-09-15 19:55:54, Vladimir Davydov wrote:
> > On Tue, Sep
On Tue, Sep 01, 2015 at 05:01:20PM +0200, Michal Hocko wrote:
> On Tue 01-09-15 16:40:03, Vladimir Davydov wrote:
> > On Tue, Sep 01, 2015 at 02:36:12PM +0200, Michal Hocko wrote:
> > > On Mon 31-08-15 17:20:49, Vladimir Davydov wrote:
> {...}
> > > > 1. SLAB. S
On Tue, Sep 01, 2015 at 02:36:12PM +0200, Michal Hocko wrote:
> On Mon 31-08-15 17:20:49, Vladimir Davydov wrote:
> > On Mon, Aug 31, 2015 at 03:24:15PM +0200, Michal Hocko wrote:
> > > On Sun 30-08-15 22:02:16, Vladimir Davydov wrote:
> >
> > > > Tejun repor
On Mon, Aug 31, 2015 at 03:22:22PM -0500, Christoph Lameter wrote:
> On Mon, 31 Aug 2015, Vladimir Davydov wrote:
>
> > I totally agree that we should strive to make a kmem user feel roughly
> > the same in memcg as if it were running on a host with equal amount of
> > RA
On Mon, Aug 31, 2015 at 01:03:09PM -0400, Tejun Heo wrote:
> On Mon, Aug 31, 2015 at 07:51:32PM +0300, Vladimir Davydov wrote:
> ...
> > If we want to allow slab/slub implementation to invoke try_charge
> > wherever it wants, we need to introduce an asynchronous thread doing
On Mon, Aug 31, 2015 at 11:47:56AM -0400, Tejun Heo wrote:
> On Mon, Aug 31, 2015 at 06:18:14PM +0300, Vladimir Davydov wrote:
> > We have to be cautious about placing memcg_charge in slab/slub. To
> > understand why, consider SLAB case, which first tries to allocate from
>
On Mon, Aug 31, 2015 at 10:46:04AM -0400, Tejun Heo wrote:
> Hello, Vladimir.
>
> On Mon, Aug 31, 2015 at 05:20:49PM +0300, Vladimir Davydov wrote:
> ...
> > That being said, this is the fix at the right layer.
>
> While this *might* be a necessary workaround for the har
On Mon, Aug 31, 2015 at 10:39:39AM -0400, Tejun Heo wrote:
> On Mon, Aug 31, 2015 at 05:30:08PM +0300, Vladimir Davydov wrote:
> > slab/slub can issue alloc_pages() any time with any flags they want and
> > it won't be accounted to memcg, because kmem is accounted at slab/sl
On Mon, Aug 31, 2015 at 09:43:35AM -0400, Tejun Heo wrote:
> On Mon, Aug 31, 2015 at 03:24:15PM +0200, Michal Hocko wrote:
> > Right but isn't that what the caller explicitly asked for? Why should we
> > ignore that for kmem accounting? It seems like a fix at a wrong layer to
> > me. Either we shou
On Mon, Aug 31, 2015 at 03:24:15PM +0200, Michal Hocko wrote:
> On Sun 30-08-15 22:02:16, Vladimir Davydov wrote:
> > Tejun reported that sometimes memcg/memory.high threshold seems to be
> > silently ignored if kmem accounting is enabled:
> >
> > http://www.spinics.
used
without it is fallback_alloc(), which, in contrast to other
cache_grow() users, preallocates a page and passes it to cache_grow()
so that the latter does not need to invoke kmem_getpages() by itself.
Reported-by: Tejun Heo
Signed-off-by: Vladimir Davydov
orwarding it to memcg_charge_slab()
if the context allows.
Reported-by: Tejun Heo
Signed-off-by: Vladimir Davydov
---
mm/slub.c | 24 +++-
1 file changed, 11 insertions(+), 13 deletions(-)
diff --git a/mm/slub.c b/mm/slub.c
index e180f8dcd06d..416a332277cb 100644
--- a/mm/slub.c
pages, memcg reclaim will
not get invoked on kmem allocations, which will lead to uncontrollable
growth of memory usage no matter what memory.high is set to.
This patch set attempts to fix this issue. For more details please see
comments to individual patches.
Thanks,
Vladimir Davydov (2):
mm
On Wed, Aug 19, 2015 at 10:05:40AM -0400, Jeff Layton wrote:
> If the list_head is empty then we'll have called list_lru_from_kmem
> for nothing. Move that call inside of the list_empty if block.
>
> Cc: Vladimir Davydov
> Signed-off-by: Jeff Layton
Reviewed-by: Vladimir D
On Sun, Aug 09, 2015 at 11:12:25PM +0900, Kamezawa Hiroyuki wrote:
> On 2015/08/08 22:05, Vladimir Davydov wrote:
> >On Fri, Aug 07, 2015 at 10:38:16AM +0900, Kamezawa Hiroyuki wrote:
...
> >>All ? hmm. It seems that mixture of record of global memory pressure and of
> >&g
On Fri, Aug 07, 2015 at 10:38:16AM +0900, Kamezawa Hiroyuki wrote:
> On 2015/08/06 17:59, Vladimir Davydov wrote:
> >On Wed, Aug 05, 2015 at 10:34:58AM +0900, Kamezawa Hiroyuki wrote:
> >>I wonder, rather than collecting more data, rough calculation can help the
> >>
On Wed, Aug 05, 2015 at 10:34:58AM +0900, Kamezawa Hiroyuki wrote:
> Reading discussion, I feel storing more data is difficult, too.
Yep, even with the current 16-bit memcg id. Things would get even worse
if we wanted to extend it one day (will we?)
>
> I wonder, rather than collecting more dat
On Mon, Aug 03, 2015 at 04:55:32PM -0400, Johannes Weiner wrote:
> On Mon, Aug 03, 2015 at 04:52:29PM +0300, Vladimir Davydov wrote:
> > On Mon, Aug 03, 2015 at 09:23:58AM -0400, Johannes Weiner wrote:
> > > On Mon, Aug 03, 2015 at 03:04:22PM +0300, Vladimir Davydov wrote:
> &
401 - 500 of 1505 matches