Adding related people.
The thread starts at:
http://lkml.kernel.org/r/1562795006.8510.19.ca...@lca.pw
On Mon, Jul 15, 2019 at 8:01 PM Yang Shi wrote:
>
>
>
> On 7/15/19 6:36 PM, Qian Cai wrote:
> >
> >> On Jul 15, 2019, at 8:22 PM, Yang Shi wrote:
> >>
> >>
> >>
> >> On 7/15/19 2:23 PM, Qian
: bba4c5f96ce4 ("mm/z3fold.c: support page migration")
> Signed-off-by: Henry Burns
Reviewed-by: Shakeel Butt
> ---
> Changelog since v1:
> - Made comments explicitly refer to new_zhdr->buddy.
>
> mm/z3fold.c | 10 ++
> 1 file changed, 10 insertions(+)
>
: bba4c5f96ce4 ("mm/z3fold.c: support page migration")
> Signed-off-by: Henry Burns
Reviewed-by: Shakeel Butt
> ---
> mm/z3fold.c | 10 ++
> 1 file changed, 10 insertions(+)
>
> diff --git a/mm/z3fold.c b/mm/z3fold.c
> index 42ef9955117c..9da471bcab93 100644
y related flags from the call to kmem_cache_alloc()
> for our slots since it is a kernel allocation.
>
> Signed-off-by: Henry Burns
Reviewed-by: Shakeel Butt
> ---
> mm/z3fold.c | 5 +++--
> 1 file changed, 3 insertions(+), 2 deletions(-)
>
> diff --git a/mm/z3fo
_page(oldpage, newpage)
> a_ops->migrate_page(oldpage, newpage)
> z3fold_page_migrate(oldpage, newpage)
> trylock_page(oldpage)
>
>
> Signed-off-by: Henry Burns
Reviewed-by: Shakeel Butt
Please add the Fixes tag as well.
> ---
> mm/z3fold.c | 6 --
mm/z3fold.c: add structure for buddy handles")
>
> Reported-by: Henry Burns
> Signed-off-by: Vitaly Wool
Reviewed-by: Shakeel Butt
> ---
> mm/z3fold.c | 5 -
> 1 file changed, 4 insertions(+), 1 deletion(-)
>
> diff --git a/mm/z3fold.c b/mm/z3fold.c
> index
.org/lkml/2019/5/29/73 and
> Shakeel Butt https://lkml.org/lkml/2019/6/4/973
>
> zswap compresses swap pages into a dynamically allocated RAM-based
> memory pool. The memory pool should be zbud, z3fold or zsmalloc.
> All of them will allocate unmovable pages. It will increase the
>
Cc: a...@linux-foundation.org
On Wed, Jun 5, 2019 at 3:06 AM Hui Zhu wrote:
>
> As a zpool_driver, zsmalloc can allocate movable memory because it
> supports migrating pages.
> But zbud and z3fold cannot allocate movable memory.
>
> This commit adds malloc_support_movable to zpool_driver.
> If a
On Tue, Jul 2, 2019 at 11:03 PM Vitaly Wool wrote:
>
> On Tue, Jul 2, 2019 at 6:57 PM Henry Burns wrote:
> >
> > On Tue, Jul 2, 2019 at 12:45 AM Vitaly Wool wrote:
> > >
> > > Hi Henry,
> > >
> > > On Mon, Jul 1, 2019 at 8:31 PM Henry Burns wrote:
> > > >
> > > > Running z3fold stress testing
22.41036-1-henrybu...@google.com
> Signed-off-by: Henry Burns
> Suggested-by: Vitaly Wool
> Acked-by: Vitaly Wool
> Acked-by: David Rientjes
> Cc: Shakeel Butt
> Cc: Vitaly Vul
> Cc: Mike Rapoport
> Cc: Xidong Wang
> Cc: Jonathan Adams
> Cc:
> Signed-off-b
On Mon, Jul 1, 2019 at 5:51 PM Henry Burns wrote:
>
> __SetPageMovable() expects its page to be locked, but z3fold.c doesn't
> lock the page. Following zsmalloc.c's example we call trylock_page() and
> unlock_page(). Also make z3fold_page_migrate() assert that newpage is
> passed in locked, as
he waker has requested.
Fixes: e716f2eb24de ("mm, vmscan: prevent kswapd sleeping prematurely
due to mismatched classzone_idx")
Signed-off-by: Shakeel Butt
---
Changelog since v1:
- fixed the patch based on Yang Shi's comment.
mm/vmscan.c | 27 +++
1 file changed
On Sat, Jun 29, 2019 at 7:05 AM Alexey Dobriyan wrote:
>
> > - if (flags & SLAB_PANIC)
> > - panic("Cannot create slab %s size=%u realsize=%u order=%u
> > offset=%u flags=%lx\n",
> > - s->name, s->size, s->size,
> > - oo_order(s->oo),
On Fri, Jun 28, 2019 at 11:53 AM Yang Shi wrote:
>
>
>
> On 6/27/19 6:55 PM, Shakeel Butt wrote:
> > On production we have noticed hard lockups on large machines running
> > large jobs due to kswapd hoarding the lru lock within isolate_lru_pages when
> > sc->recla
remove the
task_in_mem_cgroup() check altogether.
Signed-off-by: Shakeel Butt
Signed-off-by: Tetsuo Handa
Acked-by: Michal Hocko
Acked-by: Roman Gushchin
Cc: David Rientjes
Cc: Johannes Weiner
Cc: KOSAKI Motohiro
Cc: Nick Piggin
Cc: Paul Jackson
Cc: Vladimir Davydov
Cc: Andrew Morton
t/mempolicy intersection check from
oom_unkillable_task() and make sure cpuset/mempolicy intersection check is
only done in the global oom context.
Signed-off-by: Shakeel Butt
Reported-by: syzbot+d0fc9d3c166bc5e4a...@syzkaller.appspotmail.com
Acked-by: Michal Hocko
Acked-by: Roman Gushchin
Cc:
and use
mem_cgroup_scan_tasks to selectively traverse only processes of the target
memcg hierarchy during memcg OOM.
Signed-off-by: Shakeel Butt
Acked-by: Michal Hocko
Acked-by: Roman Gushchin
Cc: Johannes Weiner
Cc: Tetsuo Handa
Cc: Vladimir Davydov
Cc: David Rientjes
Cc: KOSAKI Motohiro
Cc
On Tue, Jun 25, 2019 at 11:55 PM Michal Hocko wrote:
>
> On Mon 24-06-19 14:26:31, Shakeel Butt wrote:
> > The commit ef08e3b4981a ("[PATCH] cpusets: confine oom_killer to
> > mem_exclusive cpuset") introduces a heuristic where a potential
> > oom-killer vic
On Tue, Jun 25, 2019 at 11:38 PM Michal Hocko wrote:
>
> On Mon 24-06-19 14:26:30, Shakeel Butt wrote:
> > oom_unkillable_task() can be called from three different contexts i.e.
> > global OOM, memcg OOM and oom_score procfs interface. At the moment
> > oo
s. It can hoard the LRU spinlock while skipping over 100s of
GiBs of pages.
This patch only fixes (1); (2) needs a more fundamental solution.
Fixes: e716f2eb24de ("mm, vmscan: prevent kswapd sleeping prematurely
due to mismatched classzone_idx")
Signed-off-by: Shakeel Butt
---
mm
away. Instead rely on kmem_cache
> as an intermediate object.
>
> Make sure that vmstats and shrinker lists are working as previously,
> as well as /proc/kpagecgroup interface.
>
> Signed-off-by: Roman Gushchin
> Acked-by: Vladimir Davydov
Reviewed-by: Shakeel Butt
>
> Signed-off-by: Roman Gushchin
The reparenting of top level memcg and "return true" is fixed in the
later patch.
Reviewed-by: Shakeel Butt
> user  0m0.216s    user  0m0.181s
> sys   0m0.824s    sys   0m0.864s
>
> real  0m1.350s    real  0m1.295s
> user  0m0.200s    user  0m0.190s
> sys   0m0.842s    sys   0m0.811s
>
> So it looks like the difference is not noticeable in this test.
>
> Signed-off-by: Roman Gushchin
> Acked-by: Vladimir Davydov
Reviewed-by: Shakeel Butt
q context,
> which will be required in order to implement asynchronous release
> of kmem_caches.
>
> So let's switch over to the irq-save flavor of the spinlock-based
> synchronization.
>
> Signed-off-by: Roman Gushchin
Reviewed-by: Shakeel Butt
the kmem cache
destruction and allocations.
> so no new memcg kmem_cache
> creation can be scheduled after the flag is set. And if it was
> scheduled before, flush_memcg_workqueue() will wait for it anyway.
>
> So let's drop this check to simplify the code.
>
> Signed-off-by: R
fter_rcu() SLUB-only
>
> For consistency, all allocator-specific functions start with "__".
>
> Signed-off-by: Roman Gushchin
> Acked-by: Vladimir Davydov
Reviewed-by: Shakeel Butt
work -> work
>
> And RCU/delayed work callbacks in slab common code:
> kmemcg_deactivate_rcufn -> kmemcg_rcufn
> kmemcg_deactivate_workfn -> kmemcg_workfn
>
> This patch contains no functional changes, only renamings.
>
> Signed-off-by: Roman Gushchin
> Acked-by: Vladimir Davydov
Reviewed-by: Shakeel Butt
the task_in_mem_cgroup() check altogether.
Signed-off-by: Shakeel Butt
Signed-off-by: Tetsuo Handa
---
Changelog since v2:
- Further divided the patch into two patches.
- Incorporated the task_in_mem_cgroup() from Tetsuo.
Changelog since v1:
- Divide the patch into two patches.
fs/proc/base.c | 2
mem_cgroup_scan_tasks to selectively traverse only processes of the
target memcg hierarchy during memcg OOM.
Signed-off-by: Shakeel Butt
Acked-by: Michal Hocko
---
Changelog since v2:
- Updated the commit message.
Changelog since v1:
- Divide the patch into two patches.
mm/oom_kill.c | 68
The fix is to decouple the cpuset/mempolicy intersection check from
oom_unkillable_task() and make sure cpuset/mempolicy intersection check
is only done in the global oom context.
Reported-by: syzbot+d0fc9d3c166bc5e4a...@syzkaller.appspotmail.com
Signed-off-by: Shakeel Butt
---
Ch
parented caches by adding a new slab flag "SLAB_DEACTIVATED" to those
> kmem caches that will be reparent'ed if it cannot be destroyed completely.
>
> For the reparent'ed memcg kmem caches, the tag ":deact" will now be
> shown in /memcg_slabinfo.
>
> S
mask+0x49/0x70
> [ 381.346287] softirqs last enabled at (10262): []
> cgroup_idr_replace+0x3a/0x50
> [ 381.346290] softirqs last disabled at (10260): []
> cgroup_idr_replace+0x1d/0x50
> [ 381.346293] ---[ end trace b324ba73eb3659f0 ]---
>
> v2: fixed return value from memcg_ch
ach_cpu_mask+0x49/0x70
> [ 381.346287] softirqs last enabled at (10262): []
> cgroup_idr_replace+0x3a/0x50
> [ 381.346290] softirqs last disabled at (10260): []
> cgroup_idr_replace+0x1d/0x50
> [ 381.346293] ---[ end trace b324ba73eb3659f0 ]---
>
> Reported-by: Andrei Vagin
> S
On Wed, Jun 19, 2019 at 10:50 PM Michal Hocko wrote:
>
> On Wed 19-06-19 16:25:14, Shakeel Butt wrote:
> > Currently for CONFIG_SLUB, if a memcg kmem cache creation is failed and
> > the corresponding root kmem cache has SLAB_PANIC flag, the kernel will
> > be cras
On Thu, Jun 20, 2019 at 7:24 AM Waiman Long wrote:
>
> On 6/19/19 7:48 PM, Shakeel Butt wrote:
> > Hi Waiman,
> >
> > On Wed, Jun 19, 2019 at 10:16 AM Waiman Long wrote:
> >> There are concerns about memory leaks from extensive use of memory
> >> cgroups
34 1 1
> xfs_inode 89:dead 23 34 1 1
> xfs_inode 85 4 34 1 1
> xfs_inode 84 9 34 1 1
>
> The css id of the memcg is also listed. If a memcg is not online,
> the tag
this behavior. So, to keep the behavior consistent between
SLAB and SLUB, removing the panic for memcg kmem cache creation
failures. The root kmem cache creation failure for SLAB_PANIC correctly
panics for both SLAB and SLUB.
Reported-by: Dave Hansen
Signed-off-by: Shakeel Butt
---
mm/slub.c | 4
1
On Wed, Jun 19, 2019 at 3:50 PM Dave Hansen wrote:
>
> I have a bit of a grievance to file. :)
>
> I'm seeing "Cannot create slab..." panic()s coming from
> kmem_cache_open() when trying to create memory cgroups on a Fedora
> system running 5.2-rc's. The panic()s happen when failing to create
>
On Wed, Jun 19, 2019 at 8:30 AM Waiman Long wrote:
>
> On 6/19/19 11:18 AM, Shakeel Butt wrote:
> > On Wed, Jun 19, 2019 at 7:46 AM Waiman Long wrote:
> >> There are concerns about memory leaks from extensive use of memory
> >> cgroups as each memory cgroup crea
On Wed, Jun 19, 2019 at 7:46 AM Waiman Long wrote:
>
> There are concerns about memory leaks from extensive use of memory
> cgroups as each memory cgroup creates its own set of kmem caches. There
> is a possibility that the memcg kmem caches may remain even after the
> memory cgroup removal.
On Mon, Jun 17, 2019 at 6:45 PM Andrew Morton wrote:
>
> On Mon, 17 Jun 2019 06:23:07 -0700 Shakeel Butt wrote:
>
> > > Here is a patch to use CSS_TASK_ITER_PROCS.
> > >
> > > From 415e52cf55bc4ad931e4f005421b827f0b02693d Mon Sep 17 00:00:00 2001
> > >
will do a bogus
cpuset_mems_allowed_intersects() check. Removing that.
Signed-off-by: Shakeel Butt
---
Changelog since v1:
- Divide the patch into two patches.
fs/proc/base.c | 3 +--
include/linux/oom.h | 1 -
mm/oom_kill.c | 28 +++-
3 files changed, 16
dump_tasks() currently goes through all the processes present on the
system even for memcg OOMs. Change dump_tasks() similar to
select_bad_process() and use mem_cgroup_scan_tasks() to selectively
traverse the processes of the memcgs during memcg OOM.
Signed-off-by: Shakeel Butt
---
Changelog
On Mon, Jun 17, 2019 at 9:17 AM Michal Hocko wrote:
>
> On Mon 17-06-19 08:59:54, Shakeel Butt wrote:
> > Currently oom_unkillable_task() checks mems_allowed even for memcg OOMs
> > which does not make sense as memcg OOMs can not be triggered due to
> > numa constraints. F
().
Signed-off-by: Shakeel Butt
---
fs/proc/base.c | 3 +-
include/linux/oom.h | 3 +-
mm/oom_kill.c | 100 +---
3 files changed, 60 insertions(+), 46 deletions(-)
diff --git a/fs/proc/base.c b/fs/proc/base.c
index b8d5d100ed4a..69b0d1b6583d
On Sun, Jun 16, 2019 at 8:14 AM Tetsuo Handa
wrote:
>
> On 2019/06/16 16:37, Tetsuo Handa wrote:
> > On 2019/06/16 6:33, Tetsuo Handa wrote:
> >> On 2019/06/16 3:50, Shakeel Butt wrote:
> >>>> While dump_tasks() traverses only each thread group,
> >&g
On Sat, Jun 15, 2019 at 9:49 AM Tetsuo Handa
wrote:
>
> On 2019/06/16 1:11, Shakeel Butt wrote:
> > On Sat, Jun 15, 2019 at 6:50 AM Michal Hocko wrote:
> >> diff --git a/mm/oom_kill.c b/mm/oom_kill.c
> >> index 5a58778c91d4..43eb479a5dc7 100644
> >> --- a
On Sat, Jun 15, 2019 at 6:50 AM Michal Hocko wrote:
>
> On Fri 14-06-19 20:15:31, Shakeel Butt wrote:
> > On Fri, Jun 14, 2019 at 6:08 PM syzbot
> > wrote:
> > >
> > > Hello,
> > >
> > > syzbot found the following crash on:
> > >
On Fri, Jun 14, 2019 at 6:08 PM syzbot
wrote:
>
> Hello,
>
> syzbot found the following crash on:
>
> HEAD commit:3f310e51 Add linux-next specific files for 20190607
> git tree: linux-next
> console output: https://syzkaller.appspot.com/x/log.txt?x=15ab8771a0
> kernel config:
iver supports allocating movable memory, set it to true.
> And add zpool_malloc_support_movable(), which checks malloc_support_movable
> to report whether a zpool can allocate movable memory.
>
> Signed-off-by: Hui Zhu
Reviewed-by: Shakeel Butt
IMHO no need to block this series on z3fold query.
>
On Wed, Jun 5, 2019 at 3:06 AM Hui Zhu wrote:
>
> This is the third version that was updated according to the comments
> from Sergey Senozhatsky https://lkml.org/lkml/2019/5/29/73 and
> Shakeel Butt https://lkml.org/lkml/2019/6/4/973
>
> zswap compresses swap pages into a dyn
On Wed, Jun 5, 2019 at 10:14 AM Roman Gushchin wrote:
>
> On Tue, Jun 04, 2019 at 09:35:02PM -0700, Shakeel Butt wrote:
> > On Tue, Jun 4, 2019 at 7:45 PM Roman Gushchin wrote:
> > >
> > > Johannes noticed that reading the memcg kmem_cache pointer in
> > &
On Wed, Jun 5, 2019 at 3:06 AM Hui Zhu wrote:
>
> As a zpool_driver, zsmalloc can allocate movable memory because it
> supports migrating pages.
> But zbud and z3fold cannot allocate movable memory.
>
Cc: Vitaly
It seems like z3fold does support page migration but z3fold's malloc
is rejecting
mp_rmb() to be paired with smp_wmb() in
> memcg_create_kmem_cache().
>
> The same applies to memcg_create_kmem_cache() itself,
> which reads the same value without barriers and READ_ONCE().
>
> Suggested-by: Johannes Weiner
> Signed-off-by: Roman Gushchin
Reviewed-by:
On Sun, Jun 2, 2019 at 2:47 AM Hui Zhu wrote:
>
> This is the second version that was updated according to the comments
> from Sergey Senozhatsky in https://lkml.org/lkml/2019/5/29/73
>
> zswap compresses swap pages into a dynamically allocated RAM-based
> memory pool. The memory pool should be
On Tue, May 21, 2019 at 8:16 AM Johannes Weiner wrote:
>
> The kernel test robot noticed a 26% will-it-scale pagefault regression
> from commit 42a300353577 ("mm: memcontrol: fix recursive statistics
> correctness & scalabilty"). This appears to be caused by bouncing the
> additional cachelines
On Tue, May 28, 2019 at 1:42 AM Michal Hocko wrote:
>
> On Tue 28-05-19 11:04:46, Konstantin Khlebnikov wrote:
> > On 28.05.2019 10:38, Michal Hocko wrote:
> [...]
> > > Could you define the exact semantic? Ideally something for the manual
> > > page please?
> > >
> >
> > Like kswapd which works
On Mon, May 27, 2019 at 9:32 PM Shakeel Butt wrote:
>
> Syzbot reported the following memory leak:
>
> BUG: memory leak
> unreferenced object 0x888114f26040 (size 32):
> comm "syz-executor626", pid 7056,
da>] do_syscall_64+0x76/0x1a0 arch/x86/entry/common.c:301
[<43d74ca0>] entry_SYSCALL_64_after_hwframe+0x44/0xa9
This is a simple off by one bug on the error path.
Reported-by: syzbot+f90a420dfe2b1b03c...@syzkaller.appspotmail.com
Signed-off-by: Shakeel Butt
---
mm/list_lru.c | 2
there will not be
any process in the internal nodes and thus no chance of local pressure.
Signed-off-by: Shakeel Butt
Reviewed-by: Roman Gushchin
Acked-by: Johannes Weiner
---
Changelog since v2:
- Added documentation.
Changelog since v1:
- refactor memory_events_show to share between events
On Fri, May 24, 2019 at 12:33 PM wrote:
>
> From: Ira Weiny
>
> RFC I have no idea if this is correct or not. But looking at
> release_pages() I see a call to both __ClearPageActive() and
> __ClearPageWaiters() while in __page_cache_release() I do not.
>
> Is this a bug which needs to be fixed?
workingset-a
> + cat workingset-a
> + ./mincore workingset-a
> 153600/153600 workingset-a
> + dd of=workingset-b bs=1M count=0 seek=600
> + cat workingset-b
> + ./mincore workingset-a workingset-b
> 124607/153600 workingset-a
> 87876/153600 workingset-b
> + cat workingset-b
> + ./mincore workingset-a workingset-b
> 81313/153600 workingset-a
> 133321/153600 workingset-b
> + cat workingset-b
> + ./mincore workingset-a workingset-b
> 63036/153600 workingset-a
> 153600/153600 workingset-b
>
> Cc: sta...@vger.kernel.org # 4.20+
> Signed-off-by: Johannes Weiner
Reviewed-by: Shakeel Butt
On Fri, May 24, 2019 at 10:06 AM Johannes Weiner wrote:
>
> On Fri, May 24, 2019 at 09:11:46AM -0700, Matthew Wilcox wrote:
> > On Thu, May 23, 2019 at 03:59:33PM -0400, Johannes Weiner wrote:
> > > My point is that we cannot have random drivers' internal data
> > > structures charge to and pin
On Thu, May 23, 2019 at 11:37 AM Matthew Wilcox wrote:
>
> On Thu, May 23, 2019 at 01:43:49PM -0400, Johannes Weiner wrote:
> > I noticed that recent upstream kernels don't account the xarray nodes
> > of the page cache to the allocating cgroup, like we used to do for the
> > radix tree nodes.
>
On Mon, May 20, 2019 at 7:55 PM Anshuman Khandual
wrote:
>
>
>
> On 05/20/2019 10:29 PM, Tim Murray wrote:
> > On Sun, May 19, 2019 at 11:37 PM Anshuman Khandual
> > wrote:
> >>
> >> Or Is the objective here is reduce the number of processes which get
> >> killed by
> >> lmkd by triggering
On Sun, May 19, 2019 at 8:53 PM Minchan Kim wrote:
>
> - Background
>
> The Android terminology used for forking a new process and starting an app
> from scratch is a cold start, while resuming an existing app is a hot start.
> While we continually try to improve the performance of cold starts,
On Fri, May 17, 2019 at 5:59 PM Roman Gushchin wrote:
>
> On Fri, May 17, 2019 at 05:18:18PM -0700, Shakeel Butt wrote:
> > The memory controller in cgroup v2 exposes memory.events file for each
> > memcg which shows the number of times events like low, high, max, oom
>
there will not be
any process in the internal nodes and thus no chance of local pressure.
Signed-off-by: Shakeel Butt
---
Changelog since v1:
- refactor memory_events_show to share between events and events.local
include/linux/memcontrol.h | 7 ++-
mm/memcontrol.c| 34
there will not be
any process in the internal nodes and thus no chance of local pressure.
Signed-off-by: Shakeel Butt
---
include/linux/memcontrol.h | 7 ++-
mm/memcontrol.c| 25 +
2 files changed, 31 insertions(+), 1 deletion(-)
diff --git a/include/linux
eness
> by a bool flag in struct list_lru.
>
> [v2] use the idea proposed by Vladimir -- the bool flag.
>
> Signed-off-by: Jiri Slaby
Reviewed-by: Shakeel Butt
> Cc: Johannes Weiner
> Cc: Michal Hocko
> Suggested-by: Vladimir Davydov
> Acked-by: Vladimir Davydov
> Cc:
From: Christopher Lameter
Date: Wed, May 15, 2019 at 7:00 AM
To: Roman Gushchin
Cc: Andrew Morton, Shakeel Butt, ,
, , Johannes Weiner,
Michal Hocko, Rik van Riel, Vladimir Davydov,
> On Tue, 14 May 2019, Roman Gushchin wrote:
>
> > To make this possible we need to introduce
From: Roman Gushchin
Date: Tue, May 14, 2019 at 2:54 PM
To: Andrew Morton, Shakeel Butt
Cc: , ,
, Johannes Weiner, Michal Hocko, Rik van Riel,
Christoph Lameter, Vladimir Davydov, , Roman
Gushchin
> Switching to an indirect scheme of getting mem_cgroup pointer for
> !root slab pages broke
From: Roman Gushchin
Date: Tue, May 14, 2019 at 2:54 PM
To: Andrew Morton, Shakeel Butt
Cc: , ,
, Johannes Weiner, Michal Hocko, Rik van Riel,
Christoph Lameter, Vladimir Davydov, , Roman
Gushchin
> Let's reparent memcg slab memory on memcg offlining. This allows us
> to release the
From: Roman Gushchin
Date: Tue, May 14, 2019 at 2:55 PM
To: Andrew Morton, Shakeel Butt
Cc: , ,
, Johannes Weiner, Michal Hocko, Rik van Riel,
Christoph Lameter, Vladimir Davydov, , Roman
Gushchin
> This commit makes several important changes in the lifecycle
> of a non-root kmem_cache,
-killer in the charging path for fanotify and inotify
event allocations.
Signed-off-by: Shakeel Butt
Acked-by: Michal Hocko
---
Changelog since v2:
- None
Changelog since v1:
- commit message updated
mm/memcontrol.c | 4 +---
1 file changed, 1 insertion(+), 3 deletions(-)
diff --git a/mm
emcg, explicitly add
__GFP_RETRY_MAYFAIL to the fanotify and inotify event allocations.
Signed-off-by: Shakeel Butt
Reviewed-by: Roman Gushchin
---
Changelog since v2:
- updated the comments.
Changelog since v1:
- Fixed usage of __GFP_RETRY_MAYFAIL flag.
fs/notify/fanotify/fanotify.c
From: Roman Gushchin
Date: Mon, May 13, 2019 at 1:22 PM
To: Shakeel Butt
Cc: Andrew Morton, Linux MM, LKML, Kernel Team, Johannes Weiner,
Michal Hocko, Rik van Riel, Christoph Lameter, Vladimir Davydov,
Cgroups
> On Fri, May 10, 2019 at 05:32:15PM -0700, Shakeel Butt wrote:
> > Fr
emcg, explicitly add
__GFP_RETRY_MAYFAIL to the fanotify and inotify event allocations.
Signed-off-by: Shakeel Butt
Reviewed-by: Roman Gushchin
---
Changelog since v1:
- Fixed usage of __GFP_RETRY_MAYFAIL flag.
fs/notify/fanotify/fanotify.c| 5 -
fs/notify/inotify/inotify_fsnotify.c
-killer in the charging path for fanotify and inotify
event allocations.
Signed-off-by: Shakeel Butt
Acked-by: Michal Hocko
---
Changelog since v1:
- commit message updated.
mm/memcontrol.c | 4 +---
1 file changed, 1 insertion(+), 3 deletions(-)
diff --git a/mm/memcontrol.c b/mm/memcontrol.c
From: Roman Gushchin
Date: Wed, May 8, 2019 at 1:41 PM
To: Andrew Morton, Shakeel Butt
Cc: , ,
, Johannes Weiner, Michal Hocko, Rik van Riel,
Christoph Lameter, Vladimir Davydov, , Roman
Gushchin
> This commit makes several important changes in the lifecycle
> of a non-root kmem_cache,
From: Roman Gushchin
Date: Wed, May 8, 2019 at 1:41 PM
To: Andrew Morton, Shakeel Butt
Cc: , ,
, Johannes Weiner, Michal Hocko, Rik van Riel,
Christoph Lameter, Vladimir Davydov, , Roman
Gushchin
> Let's reparent memcg slab memory on memcg offlining. This allows us
> to release the memory
From: Roman Gushchin
Date: Wed, May 8, 2019 at 1:40 PM
To: Andrew Morton, Shakeel Butt
Cc: , ,
, Johannes Weiner, Michal Hocko, Rik van Riel,
Christoph Lameter, Vladimir Davydov, , Roman
Gushchin
> Switching to an indirect scheme of getting mem_cgroup pointer for
> !root slab pages broke
From: Roman Gushchin
Date: Wed, May 8, 2019 at 1:30 PM
To: Andrew Morton, Shakeel Butt
Cc: , ,
, Johannes Weiner, Michal Hocko, Rik van Riel,
Christoph Lameter, Vladimir Davydov, , Roman
Gushchin
> Let's separate the page counter modification code out of
> __memcg_kmem_uncharge() in
From: Roman Gushchin
Date: Wed, May 8, 2019 at 1:40 PM
To: Andrew Morton, Shakeel Butt
Cc: , ,
, Johannes Weiner, Michal Hocko, Rik van Riel,
Christoph Lameter, Vladimir Davydov, , Roman
Gushchin
> Currently the page accounting code is duplicated in SLAB and SLUB
> internals. Let'
From: Roman Gushchin
Date: Wed, May 8, 2019 at 1:30 PM
To: Andrew Morton, Shakeel Butt
Cc: , ,
, Johannes Weiner, Michal Hocko, Rik van Riel,
Christoph Lameter, Vladimir Davydov, , Roman
Gushchin
> Currently SLUB uses a work scheduled after an RCU grace period
> to deactivate a no
From: Roman Gushchin
Date: Wed, May 8, 2019 at 1:30 PM
To: Andrew Morton, Shakeel Butt
Cc: , ,
, Johannes Weiner, Michal Hocko, Rik van Riel,
Christoph Lameter, Vladimir Davydov, , Roman
Gushchin
> Initialize kmem_cache->memcg_params.memcg pointer in
> memcg_link_cache() ra
From: Roman Gushchin
Date: Wed, May 8, 2019 at 1:30 PM
To: Andrew Morton, Shakeel Butt
Cc: , ,
, Johannes Weiner, Michal Hocko, Rik van Riel,
Christoph Lameter, Vladimir Davydov, , Roman
Gushchin
> # Why do we need this?
>
> We've noticed that the number of dying cgroups is steadil
er
> Cc: Michal Hocko
> Cc: Mel Gorman
> Cc: "Kirill A . Shutemov"
> Cc: Hugh Dickins
> Signed-off-by: Yang Shi
Nice find.
Reviewed-by: Shakeel Butt
> ---
> I'm not quite sure if it was the intended behavior or just an omission. I tried
> to dig into the review
gt; > static inline struct list_lru_one *
>
> Yep, I didn't expect node 0 could ever be unavailable, my bad.
> The patch looks fine to me:
>
> Acked-by: Vladimir Davydov
>
> However, I tend to agree with Michal that (ab)using node[0].memcg_lrus
> to check if a list_lru is me
emcg, explicitly add
__GFP_RETRY_MAYFAIL to the fanotify and inotify event allocations.
Signed-off-by: Shakeel Butt
---
Changelog since v1:
- Fixed usage of __GFP_RETRY_MAYFAIL flag.
fs/notify/fanotify/fanotify.c| 5 -
fs/notify/inotify/inotify_fsnotify.c | 7 +--
2 files changed
On Mon, Apr 29, 2019 at 5:41 PM Michal Hocko wrote:
>
> On Mon 29-04-19 10:13:32, Shakeel Butt wrote:
> [...]
> > /*
> >* For queues with unlimited length lost events are not expected and
> >* can possibly have security implication
emcg, explicitly add
__GFP_RETRY_MAYFAIL to the fanotify and inotify event allocations.
Signed-off-by: Shakeel Butt
---
fs/notify/fanotify/fanotify.c| 4 +++-
fs/notify/inotify/inotify_fsnotify.c | 7 +--
2 files changed, 8 insertions(+), 3 deletions(-)
diff --git a/fs/notify/fanotify/fa
-killer in the charging path for fanotify and inotify
event allocations.
Signed-off-by: Shakeel Butt
Acked-by: Michal Hocko
---
Changelog since v1:
- commit message updated.
mm/memcontrol.c | 4 +---
1 file changed, 1 insertion(+), 3 deletions(-)
diff --git a/mm/memcontrol.c b/mm/memcontrol.c
On Mon, Apr 29, 2019 at 5:22 AM Michal Hocko wrote:
>
> On Sun 28-04-19 16:56:13, Shakeel Butt wrote:
> > The documentation of __GFP_RETRY_MAYFAIL clearly mentioned that the
> > OOM killer will not be triggered and indeed the page alloc does not
> > invoke OOM killer for s
The documentation of __GFP_RETRY_MAYFAIL clearly mentioned that the
OOM killer will not be triggered and indeed the page alloc does not
invoke OOM killer for such allocations. However we do trigger memcg
OOM killer for __GFP_RETRY_MAYFAIL. Fix that.
Signed-off-by: Shakeel Butt
---
mm
On Wed, Apr 24, 2019 at 11:49 PM Michal Hocko wrote:
>
> On Tue 23-04-19 08:44:05, Shakeel Butt wrote:
> > The commit 475d0487a2ad ("mm: memcontrol: use per-cpu stocks for socket
> > memory uncharging") added refill_stock() for skmem uncharging path to
> > opti
On Wed, Apr 24, 2019 at 12:17 PM Roman Gushchin wrote:
>
> On Wed, Apr 24, 2019 at 10:23:45AM -0700, Shakeel Butt wrote:
> > Hi Roman,
> >
> > On Tue, Apr 23, 2019 at 9:30 PM Roman Gushchin wrote:
> > >
> > > Currently the page accounting code is d
Hi Roman,
On Tue, Apr 23, 2019 at 9:30 PM Roman Gushchin wrote:
>
> Currently the page accounting code is duplicated in SLAB and SLUB
> internals. Let's move it into new (un)charge_slab_page helpers
> in the slab_common.c file. These helpers will be responsible
> for statistics (global and
ned
memcgs but it may impact the performance of network traffic for the
sockets used by other cgroups.
Signed-off-by: Shakeel Butt
Cc: Roman Gushchin
Cc: Johannes Weiner
Cc: Michal Hocko
Cc: Vladimir Davydov
Cc: Andrew Morton
---
Changelog since v1:
- No need to bypass offline memcgs in the re
On Fri, Apr 19, 2019 at 1:07 PM Roman Gushchin wrote:
>
> On Thu, Apr 18, 2019 at 02:42:24PM -0700, Shakeel Butt wrote:
> > The commit 475d0487a2ad ("mm: memcontrol: use per-cpu stocks for socket
> > memory uncharging") added refill_stock() for skmem uncharging pa