On Mon, Jun 17, 2019 at 6:45 PM Andrew Morton wrote:
>
> On Mon, 17 Jun 2019 06:23:07 -0700 Shakeel Butt wrote:
>
> > > Here is a patch to use CSS_TASK_ITER_PROCS.
> > >
> > > From 415e52cf55bc4ad931e4f005421b827f0b02693d Mon Sep 17 00:00:00 2001
> > >
s. It can hoard the LRU spinlock while skipping over 100s of
GiBs of pages.
This patch only fixes (1); (2) needs a more fundamental solution.
Fixes: e716f2eb24de ("mm, vmscan: prevent kswapd sleeping prematurely due to mismatched classzone_idx")
Signed-off-by: Shakeel Butt
---
mm
On Tue, Jun 25, 2019 at 11:38 PM Michal Hocko wrote:
>
> On Mon 24-06-19 14:26:30, Shakeel Butt wrote:
> > oom_unkillable_task() can be called from three different contexts, i.e.
> > global OOM, memcg OOM, and the oom_score procfs interface. At the moment
> > oo
On Tue, Jun 25, 2019 at 11:55 PM Michal Hocko wrote:
>
> On Mon 24-06-19 14:26:31, Shakeel Butt wrote:
> > The commit ef08e3b4981a ("[PATCH] cpusets: confine oom_killer to
> > mem_exclusive cpuset") introduces a heuristic where a potential
> > oom-killer vic
and use
mem_cgroup_scan_tasks to selectively traverse only processes of the target
memcg hierarchy during memcg OOM.
Signed-off-by: Shakeel Butt
Acked-by: Michal Hocko
Acked-by: Roman Gushchin
Cc: Johannes Weiner
Cc: Tetsuo Handa
Cc: Vladimir Davydov
Cc: David Rientjes
Cc: KOSAKI Motohiro
Cc
remove the
task_in_mem_cgroup() check altogether.
Signed-off-by: Shakeel Butt
Signed-off-by: Tetsuo Handa
Acked-by: Michal Hocko
Acked-by: Roman Gushchin
Cc: David Rientjes
Cc: Johannes Weiner
Cc: KOSAKI Motohiro
Cc: Nick Piggin
Cc: Paul Jackson
Cc: Vladimir Davydov
Cc: Andrew Morton
t/mempolicy intersection check from
oom_unkillable_task() and make sure cpuset/mempolicy intersection check is
only done in the global oom context.
Signed-off-by: Shakeel Butt
Reported-by: syzbot+d0fc9d3c166bc5e4a...@syzkaller.appspotmail.com
Acked-by: Michal Hocko
Acked-by: Roman Gushchin
Cc:
On Fri, Jun 28, 2019 at 11:53 AM Yang Shi wrote:
>
>
>
> On 6/27/19 6:55 PM, Shakeel Butt wrote:
> > On production we have noticed hard lockups on large machines running
> > large jobs due to kswapd hoarding the LRU lock within isolate_lru_pages() when
> > sc->recla
On Sat, Jun 29, 2019 at 7:05 AM Alexey Dobriyan wrote:
>
> > - if (flags & SLAB_PANIC)
> > - panic("Cannot create slab %s size=%u realsize=%u order=%u
> > offset=%u flags=%lx\n",
> > - s->name, s->size, s->size,
> > - oo_order(s->oo),
he waker has requested.
Fixes: e716f2eb24de ("mm, vmscan: prevent kswapd sleeping prematurely due to mismatched classzone_idx")
Signed-off-by: Shakeel Butt
---
Changelog since v1:
- fixed the patch based on Yang Shi's comment.
mm/vmscan.c | 27 +++
1 file changed
On Mon, Jul 1, 2019 at 5:51 PM Henry Burns wrote:
>
> __SetPageMovable() expects its page to be locked, but z3fold.c doesn't
> lock the page. Following zsmalloc.c's example we call trylock_page() and
> unlock_page(). Also make z3fold_page_migrate() assert that newpage is
> passed in locked, as
22.41036-1-henrybu...@google.com
> Signed-off-by: Henry Burns
> Suggested-by: Vitaly Wool
> Acked-by: Vitaly Wool
> Acked-by: David Rientjes
> Cc: Shakeel Butt
> Cc: Vitaly Vul
> Cc: Mike Rapoport
> Cc: Xidong Wang
> Cc: Jonathan Adams
> Cc:
> Signed-off-b
On Wed, Jun 5, 2019 at 3:06 AM Hui Zhu wrote:
>
> As a zpool_driver, zsmalloc can allocate movable memory because it
> supports migrating pages.
> But zbud and z3fold cannot allocate movable memory.
>
Cc: Vitaly
It seems like z3fold does support page migration but z3fold's malloc
is rejecting
On Wed, Jun 5, 2019 at 10:14 AM Roman Gushchin wrote:
>
> On Tue, Jun 04, 2019 at 09:35:02PM -0700, Shakeel Butt wrote:
> > On Tue, Jun 4, 2019 at 7:45 PM Roman Gushchin wrote:
> > >
> > > Johannes noticed that reading the memcg kmem_cache pointer in
> > &
On Wed, Jun 5, 2019 at 3:06 AM Hui Zhu wrote:
>
> This is the third version that was updated according to the comments
> from Sergey Senozhatsky https://lkml.org/lkml/2019/5/29/73 and
> Shakeel Butt https://lkml.org/lkml/2019/6/4/973
>
> zswap compresses swap pages into a dyn
iver supports allocating movable memory, set it to true.
> And add zpool_malloc_support_movable() to check malloc_support_movable
> to make sure whether a zpool supports allocating movable memory.
>
> Signed-off-by: Hui Zhu
Reviewed-by: Shakeel Butt
IMHO no need to block this series on z3fold query.
&g
On Sun, Jun 2, 2019 at 2:47 AM Hui Zhu wrote:
>
> This is the second version that was updated according to the comments
> from Sergey Senozhatsky in https://lkml.org/lkml/2019/5/29/73
>
> zswap compresses swap pages into a dynamically allocated RAM-based
> memory pool. The memory pool should be
mp_rmb() to be paired with smp_wmb() in
> memcg_create_kmem_cache().
>
> The same applies to memcg_create_kmem_cache() itself,
> which reads the same value without barriers and READ_ONCE().
>
> Suggested-by: Johannes Weiner
> Signed-off-by: Roman Gushchin
Reviewed-by:
On Tue, Jul 2, 2019 at 11:03 PM Vitaly Wool wrote:
>
> On Tue, Jul 2, 2019 at 6:57 PM Henry Burns wrote:
> >
> > On Tue, Jul 2, 2019 at 12:45 AM Vitaly Wool wrote:
> > >
> > > Hi Henry,
> > >
> > > On Mon, Jul 1, 2019 at 8:31 PM Henry Burns wrote:
> > > >
> > > > Running z3fold stress testing
On Wed, Jun 19, 2019 at 7:46 AM Waiman Long wrote:
>
> There are concerns about memory leaks from extensive use of memory
> cgroups as each memory cgroup creates its own set of kmem caches. There
> > is a possibility that the memcg kmem caches may remain even after the
> memory cgroup removal.
On Wed, Jun 19, 2019 at 8:30 AM Waiman Long wrote:
>
> On 6/19/19 11:18 AM, Shakeel Butt wrote:
> > On Wed, Jun 19, 2019 at 7:46 AM Waiman Long wrote:
> >> There are concerns about memory leaks from extensive use of memory
> >> cgroups as each memory cgroup crea
On Wed, Jun 19, 2019 at 3:50 PM Dave Hansen wrote:
>
> I have a bit of a grievance to file. :)
>
> I'm seeing "Cannot create slab..." panic()s coming from
> kmem_cache_open() when trying to create memory cgroups on a Fedora
> system running 5.2-rc's. The panic()s happen when failing to create
>
this behavior. So, to keep the behavior consistent between
SLAB and SLUB, remove the panic for memcg kmem cache creation
failures. The root kmem cache creation failure with SLAB_PANIC correctly
panics for both SLAB and SLUB.
Reported-by: Dave Hansen
Signed-off-by: Shakeel Butt
---
mm/slub.c | 4
1
34 1 1
> xfs_inode 89:dead 23 34 1 1
> xfs_inode 85 4 34 1 1
> xfs_inode 84 9 34 1 1
>
> The css id of the memcg is also listed. If a memcg is not online,
> the tag &
On Thu, Jun 20, 2019 at 7:24 AM Waiman Long wrote:
>
> On 6/19/19 7:48 PM, Shakeel Butt wrote:
> > Hi Waiman,
> >
> > On Wed, Jun 19, 2019 at 10:16 AM Waiman Long wrote:
> >> There are concerns about memory leaks from extensive use of memory
> >> cgroups
On Wed, Jun 19, 2019 at 10:50 PM Michal Hocko wrote:
>
> On Wed 19-06-19 16:25:14, Shakeel Butt wrote:
> > Currently for CONFIG_SLUB, if a memcg kmem cache creation is failed and
> > the corresponding root kmem cache has SLAB_PANIC flag, the kernel will
> > be cras
ach_cpu_mask+0x49/0x70
> [ 381.346287] softirqs last enabled at (10262): []
> cgroup_idr_replace+0x3a/0x50
> [ 381.346290] softirqs last disabled at (10260): []
> cgroup_idr_replace+0x1d/0x50
> [ 381.346293] ---[ end trace b324ba73eb3659f0 ]---
>
> Reported-by: Andrei Vagin
> S
On Mon, May 4, 2020 at 9:06 AM Michal Hocko wrote:
>
> On Mon 04-05-20 08:35:57, Shakeel Butt wrote:
> > On Mon, May 4, 2020 at 8:00 AM Michal Hocko wrote:
> > >
> > > On Mon 04-05-20 07:53:01, Shakeel Butt wrote:
> [...]
> > > > I am trying to
On Mon, May 4, 2020 at 11:30 PM Dave Chinner wrote:
>
> On Tue, Apr 28, 2020 at 10:27:32PM -0400, Johannes Weiner wrote:
> > On Wed, Apr 29, 2020 at 07:47:34AM +1000, Dave Chinner wrote:
> > > On Tue, Apr 28, 2020 at 12:13:46PM -0400, Dan Schatzberg wrote:
> > > > This patch series does some
> >
On Tue, May 5, 2020 at 12:13 AM Michal Hocko wrote:
>
> On Mon 04-05-20 12:23:51, Shakeel Butt wrote:
> [...]
> > *Potentially* useful for debugging versus actually beneficial for
> > "sweep before tear down" use-case.
>
> I definitely do not want to preve
On Tue, May 5, 2020 at 8:27 AM Johannes Weiner wrote:
>
> On Mon, May 04, 2020 at 12:23:51PM -0700, Shakeel Butt wrote:
> > On Mon, May 4, 2020 at 9:06 AM Michal Hocko wrote:
> > > I really hate to repeat myself but this is no different from a regular
> > > oom s
On Fri, May 15, 2020 at 6:24 AM Johannes Weiner wrote:
>
> On Fri, May 15, 2020 at 10:29:55AM +0200, Michal Hocko wrote:
> > On Sat 09-05-20 07:06:38, Shakeel Butt wrote:
> > > On Fri, May 8, 2020 at 2:44 PM Johannes Weiner wrote:
> > > >
> > > > On F
On Fri, May 15, 2020 at 8:00 AM Roman Gushchin wrote:
>
> On Fri, May 15, 2020 at 06:44:44AM -0700, Shakeel Butt wrote:
> > On Fri, May 15, 2020 at 6:24 AM Johannes Weiner wrote:
> > >
> > > On Fri, May 15, 2020 at 10:29:55AM +0200, Michal Hocko wrote:
> > &
On Fri, May 15, 2020 at 11:09 AM Roman Gushchin wrote:
>
> On Fri, May 15, 2020 at 10:49:22AM -0700, Shakeel Butt wrote:
> > On Fri, May 15, 2020 at 8:00 AM Roman Gushchin wrote:
> > >
> > > On Fri, May 15, 2020 at 06:44:44AM -0700, Shakeel Butt wrote:
> >
On Fri, May 15, 2020 at 11:09 AM Johannes Weiner wrote:
>
> On Fri, May 15, 2020 at 10:49:22AM -0700, Shakeel Butt wrote:
> > On Fri, May 15, 2020 at 8:00 AM Roman Gushchin wrote:
> > > On Fri, May 15, 2020 at 06:44:44AM -0700, Shakeel Butt wrote:
> > > > On Fri,
GFP_KERNEL for each individual page allocation
and thus there is no need to have any fallback after vzalloc.
Signed-off-by: Shakeel Butt
---
net/packet/af_packet.c | 15 ---
1 file changed, 4 insertions(+), 11 deletions(-)
diff --git a/net/packet/af_packet.c b/net/packet/af_packet.c
Many of the callbacks called by pagevec_lru_move_fn() do not correctly
update the vmstats for huge pages. Fix that. Also make __pagevec_lru_add_fn()
use the irq-unsafe alternative to update the stat, as irqs are
already disabled.
Signed-off-by: Shakeel Butt
---
mm/swap.c | 14 --
1
ready disabled.
Fixes: 2262185c5b28 ("mm: per-cgroup memory reclaim stats")
Signed-off-by: Shakeel Butt
---
mm/swap.c | 17 -
1 file changed, 12 insertions(+), 5 deletions(-)
diff --git a/mm/swap.c b/mm/swap.c
index 3dbef6517cac..4eb179ee0b72 100644
--- a/mm/swap.c
+++ b/mm
Currently update_page_reclaim_stat() updates the lruvec.reclaim_stats
just once for a page, irrespective of whether the page is huge or not. Fix
that by passing hpage_nr_pages(page) to it.
Signed-off-by: Shakeel Butt
---
mm/swap.c | 20 ++--
1 file changed, 10 insertions(+), 10
On Fri, May 8, 2020 at 2:51 PM Johannes Weiner wrote:
>
> On Fri, May 08, 2020 at 02:22:15PM -0700, Shakeel Butt wrote:
> > Currently update_page_reclaim_stat() updates the lruvec.reclaim_stats
> > just once for a page, irrespective of whether the page is huge or not. Fix t
On Fri, May 8, 2020 at 2:44 PM Johannes Weiner wrote:
>
> On Fri, May 08, 2020 at 10:06:30AM -0700, Shakeel Butt wrote:
> > One way to measure the efficiency of memory reclaim is to look at the
> > ratio (pgscan+pgrefill)/pgsteal. However at the moment these stats are
> >
the fixup
from the splitting path.
Signed-off-by: Johannes Weiner
Signed-off-by: Shakeel Butt
---
Revived the patch from https://lore.kernel.org/patchwork/patch/685703/
mm/swap.c | 23 +--
1 file changed, 9 insertions(+), 14 deletions(-)
diff --git a/mm/swap.c b/mm/swap.c
index
u vmevents before releasing memcg")
Fixes: c350a99ea2b1 ("mm: memcontrol: flush percpu vmstats before releasing memcg")
Signed-off-by: Shakeel Butt
Cc: Roman Gushchin
Cc: Michal Hocko
Cc: Johannes Weiner
Cc: Vladimir Davydov
Cc:
Cc: Andrew Morton
---
mm/memcontrol.c | 12
On Mon, May 11, 2020 at 2:11 PM Andrew Morton wrote:
>
> On Sat, 9 May 2020 07:19:46 -0700 Shakeel Butt wrote:
>
> > Currently, THP are counted as single pages until they are split right
> > before being swapped out. However, at that point the VM is already in
>
On Mon, May 11, 2020 at 8:57 AM Johannes Weiner wrote:
>
> On Thu, May 07, 2020 at 10:00:07AM -0700, Shakeel Butt wrote:
> > On Thu, May 7, 2020 at 9:47 AM Michal Hocko wrote:
> > >
> > > On Thu 07-05-20 09:33:01, Shakeel Butt wrote:
> > > [...]
&g
On Mon, May 11, 2020 at 2:58 PM Andrew Morton wrote:
>
> On Mon, 11 May 2020 14:38:23 -0700 Shakeel Butt wrote:
>
> > On Mon, May 11, 2020 at 2:11 PM Andrew Morton
> > wrote:
> > >
> > > On Sat, 9 May 2020 07:19:46 -0700 Shakeel Butt
> > >
memcg or to system work queue.
Signed-off-by: Shakeel Butt
---
mm/memcontrol.c | 63 +
1 file changed, 37 insertions(+), 26 deletions(-)
diff --git a/mm/memcontrol.c b/mm/memcontrol.c
index 317dbbaac603..7abb762f26cd 100644
--- a/mm/memcontrol.c
+++ b
On Thu, May 7, 2020 at 9:47 AM Michal Hocko wrote:
>
> On Thu 07-05-20 09:33:01, Shakeel Butt wrote:
> [...]
> > @@ -2600,8 +2596,23 @@ static int try_charge(struct mem_cgroup *memcg,
> > gfp_t gfp_mask,
> >
to get the system level stats is to get
these stats from root's memory.stat but root does not expose that
interface. Also for !CONFIG_MEMCG machines /proc/vmstat is the only way
to get these stats. So, make these stats consistent.
Signed-off-by: Shakeel Butt
---
mm/vmscan.c | 6 ++
1 file
On Fri, May 8, 2020 at 3:34 AM Yafang Shao wrote:
>
> On Fri, May 8, 2020 at 4:49 AM Shakeel Butt wrote:
> >
> > One way to measure the efficiency of memory reclaim is to look at the
> > ratio (pgscan+pgrefill)/pgsteal. However at the moment these stats are
>
On Fri, May 8, 2020 at 6:38 AM Johannes Weiner wrote:
>
> On Fri, May 08, 2020 at 06:25:14AM -0700, Shakeel Butt wrote:
> > On Fri, May 8, 2020 at 3:34 AM Yafang Shao wrote:
> > >
> > > On Fri, May 8, 2020 at 4:49 AM Shakeel Butt wrote:
> > > >
hierarchy currently has to know about these intricacies and
translate semantics back and forth.
Generally having the fully recursive memory.stat at the root
level could help a broader range of usecases.
Signed-off-by: Shakeel Butt
Suggested-by: Johannes Weiner
---
mm
On Sat, May 16, 2020 at 1:40 PM David Miller wrote:
>
> From: Shakeel Butt
> Date: Fri, 15 May 2020 19:17:36 -0700
>
> > and thus there is no need to have any fallback after vzalloc.
>
> This statement is false.
>
> The virtual mapping allocation or the p
On Sat, May 16, 2020 at 3:45 PM Eric Dumazet wrote:
>
> On Sat, May 16, 2020 at 3:35 PM Shakeel Butt wrote:
> >
> > On Sat, May 16, 2020 at 1:40 PM David Miller wrote:
> > >
> > > From: Shakeel Butt
> > > Date: Fri, 15 May 2020 19:17:36 -0700
> &
On Sat, May 16, 2020 at 4:39 PM David Miller wrote:
>
> From: Shakeel Butt
> Date: Sat, 16 May 2020 15:35:46 -0700
>
> > So, my argument is if non-zero order vzalloc has failed (allocations
> > internal to vzalloc, including virtual mapping allocation and page
> >
"mm/z3fold.c: use kref to prevent page free/compact
> race")
>
> Signed-off-by: Henry Burns
Reviewed-by: Shakeel Butt
> Cc:
> ---
> mm/z3fold.c | 9 -
> 1 file changed, 8 insertions(+), 1 deletion(-)
>
> diff --git a/mm/z3fold.c b/mm/z3fold.
c: support page migration")
>
> Signed-off-by: Henry Burns
Reviewed-by: Shakeel Butt
> Cc:
> ---
> mm/z3fold.c | 5 -
> 1 file changed, 4 insertions(+), 1 deletion(-)
>
> diff --git a/mm/z3fold.c b/mm/z3fold.c
> index 43de92f52961..ed19d98c9dcd 100644
> --
-by: Shakeel Butt
---
drivers/base/node.c| 4 ++--
fs/proc/meminfo.c | 4 ++--
include/linux/memcontrol.h | 2 --
include/linux/mmzone.h | 8
kernel/fork.c | 29 ++---
kernel/scs.c | 2 +-
mm/memcontrol.c
ent the option 2. Please ignore
the fine details, as I am more interested in getting feedback on the
proposal and the interface options.
Signed-off-by: Shakeel Butt
---
fs/kernfs/dir.c | 20 +++
include/linux/cgroup-defs.h | 2 ++
include/linux/kernfs.h |
simpler and more robust. It will also allow guarding some checks which
> otherwise would stay unguarded.
>
> Signed-off-by: Roman Gushchin
Reviewed-by: Shakeel Butt
On Thu, Jul 2, 2020 at 11:35 PM Michal Hocko wrote:
>
> On Thu 02-07-20 08:22:22, Shakeel Butt wrote:
> [...]
> > Interface options:
> > --
> >
> > 1) memcg interface e.g. 'echo 10M > memory.reclaim'
> >
> > + simple
> > + c
On Fri, Jul 3, 2020 at 8:50 AM Roman Gushchin wrote:
>
> On Fri, Jul 03, 2020 at 07:23:14AM -0700, Shakeel Butt wrote:
> > On Thu, Jul 2, 2020 at 11:35 PM Michal Hocko wrote:
> > >
> > > On Thu 02-07-20 08:22:22, Shakeel Butt wrote:
> &g
he reclaim behaviour between
> the two.
>
> There's precedent for this behaviour: we already do reclaim retries when
> writing to memory.{high,max}, in max reclaim, and in the page allocator
> itself.
>
> Signed-off-by: Chris Down
> Cc: Andrew Morton
> Cc: Johannes Weiner
> Cc: Tejun Heo
> Cc: Michal Hocko
Reviewed-by: Shakeel Butt
On Mon, Jul 6, 2020 at 2:38 PM Roman Gushchin wrote:
>
> On Fri, Jul 03, 2020 at 09:27:19AM -0700, Shakeel Butt wrote:
> > On Fri, Jul 3, 2020 at 8:50 AM Roman Gushchin wrote:
> > >
> > > On Fri, Jul 03, 2020 at 07:23:14AM -0700, Shakeel Butt wrote:
> > > &
On Tue, Jul 7, 2020 at 5:14 AM Michal Hocko wrote:
>
> On Fri 03-07-20 07:23:14, Shakeel Butt wrote:
> > On Thu, Jul 2, 2020 at 11:35 PM Michal Hocko wrote:
> > >
> > > On Thu 02-07-20 08:22:22, Shakeel Butt wrote:
> &g
On Tue, Jul 7, 2020 at 10:36 AM Roman Gushchin wrote:
>
> charge_slab_page() is not using the gfp argument anymore,
> remove it.
>
> Signed-off-by: Roman Gushchin
Reviewed-by: Shakeel Butt
() respectively.
>
> Signed-off-by: Roman Gushchin
Reviewed-by: Shakeel Butt
gnificantly exceeds the
> cost of a jump. However, the conversion makes the code look more
> logical.
>
> Signed-off-by: Roman Gushchin
Reviewed-by: Shakeel Butt
On Mon, Jun 29, 2020 at 8:24 PM Roman Gushchin wrote:
>
> On Mon, Jun 29, 2020 at 05:44:13PM -0700, Shakeel Butt wrote:
> > Currently the kernel stack is being accounted per-zone. There is no need
> > to do that. In addition due to being per-zone, memcg has to k
localize the kernel stack stats updates to
account_kernel_stack().
Signed-off-by: Shakeel Butt
---
Changes since v1:
- Use lruvec based stat update functions based on Roman's suggestion.
drivers/base/node.c| 4 +--
fs/proc/meminfo.c | 4 +--
include/linux/memcontrol.h | 21
On Mon, Jun 29, 2020 at 4:48 PM Dave Hansen wrote:
>
> I've been sitting on these for too long. The main purpose of this
> post is to have a public discussion with the other folks who are
> interested in this functionality and converge on a single
> implementation.
>
> This set directly
On Tue, Jun 30, 2020 at 11:51 AM Dave Hansen wrote:
>
> On 6/30/20 11:36 AM, Shakeel Butt wrote:
> >> This is part of a larger patch set. If you want to apply these or
> >> play with them, I'd suggest using the tree from here. It includes
> >> autonuma-base
mory_high_write is reclaiming. With this change
> the reclaim here might be just playing never ending catch up. On the
> plus side a break out from the reclaim loop would just enforce the limit
> so if the operation takes too long then the reclaim burden will move
> over to consumers event
On Fri, Jul 10, 2020 at 11:42 AM Roman Gushchin wrote:
>
> On Fri, Jul 10, 2020 at 07:12:22AM -0700, Shakeel Butt wrote:
> > On Fri, Jul 10, 2020 at 5:29 AM Michal Hocko wrote:
> > >
> > > On Thu 09-07-20 12:47:18, Roman Gushchin wrote:
> > > >
, update pgrefill only for global reclaim. If someone is interested in
the stats representing both system-level as well as memcg-level reclaim,
then consult the root memcg's memory.stat instead of /proc/vmstat.
Signed-off-by: Shakeel Butt
---
mm/vmscan.c | 3 ++-
1 file changed, 2 insertions(+), 1
On Fri, Jul 10, 2020 at 7:32 PM Roman Gushchin wrote:
>
> On Fri, Jul 10, 2020 at 06:14:59PM -0700, Shakeel Butt wrote:
> > The vmstat pgrefill is useful together with pgscan and pgsteal stats to
> > measure the reclaim efficiency. However vmstat's pgrefill is not update
On Mon, Jun 8, 2020 at 4:07 PM Roman Gushchin wrote:
>
> From: Johannes Weiner
>
> The reference counting of a memcg is currently coupled directly to how
> many 4k pages are charged to it. This doesn't work well with Roman's
> new slab controller, which maintains pools of objects and doesn't
Not sure if my email went through, so, re-sending.
On Mon, Jun 8, 2020 at 4:07 PM Roman Gushchin wrote:
>
> From: Johannes Weiner
>
[...]
> @@ -3003,13 +3004,16 @@ void __memcg_kmem_uncharge_page(struct page *page,
> int order)
> */
> void mem_cgroup_split_huge_fixup(struct page *head)
> {
On Thu, Jun 18, 2020 at 6:08 PM Roman Gushchin wrote:
>
> On Thu, Jun 18, 2020 at 07:55:35AM -0700, Shakeel Butt wrote:
> > Not sure if my email went through, so, re-sending.
> >
> > On Mon, Jun 8, 2020 at 4:07 PM Roman Gushchin wrote:
> >
m_cgroup_charge(), dropped mem_cgroup_try_charge() part
> 2) I've reformatted commit references in the commit log to make
>checkpatch.pl happy.
>
> Signed-off-by: Johannes Weiner
> Signed-off-by: Roman Gushchin
> Acked-by: Roman Gushchin
Reviewed-by: Shakeel Butt
by: Roman Gushchin
> Reviewed-by: Vlastimil Babka
This is a very satisfying patch.
Reviewed-by: Shakeel Butt
Hi SeongJae,
On Mon, Jun 22, 2020 at 1:42 AM SeongJae Park wrote:
>
> Last week, this patchset received 5 'Reviewed-by' tags, but no further
> comments
> for changes. I updated the documentation, but the change is only small. For
> that reason, I'm only asking for more reviews rather than posting
On Mon, Jun 8, 2020 at 4:07 PM Roman Gushchin wrote:
>
> Deprecate memory.kmem.slabinfo.
>
> An empty file will be presented if corresponding config options are
> enabled.
>
> The interface is implementation dependent, isn't present in cgroup v2,
> and is generally useful only for core mm
On Mon, Jun 8, 2020 at 4:07 PM Roman Gushchin wrote:
>
> Because the number of non-root kmem_caches doesn't depend on the
> number of memory cgroups anymore and is generally not very big,
> there is no more need for a dedicated workqueue.
>
> Also, as there is no more need to pass any arguments
shchin
> Reviewed-by: Vlastimil Babka
Reviewed-by: Shakeel Butt
structure.
>
> Signed-off-by: Roman Gushchin
> Reviewed-by: Vlastimil Babka
Reviewed-by: Shakeel Butt
On Mon, Jun 22, 2020 at 10:40 AM Roman Gushchin wrote:
>
> On Mon, Jun 22, 2020 at 10:29:29AM -0700, Shakeel Butt wrote:
> > On Mon, Jun 8, 2020 at 4:07 PM Roman Gushchin wrote:
> > >
> > > Because the number of non-root kmem_caches doesn't depend on the
> >
On Mon, Jun 22, 2020 at 11:02 AM Roman Gushchin wrote:
>
> On Mon, Jun 22, 2020 at 10:12:46AM -0700, Shakeel Butt wrote:
> > On Mon, Jun 8, 2020 at 4:07 PM Roman Gushchin wrote:
> > >
> > > Deprecate memory.kmem.slabinfo.
> > >
> > > An empt
On Mon, Jun 22, 2020 at 11:25 AM Roman Gushchin wrote:
>
> On Mon, Jun 22, 2020 at 11:09:47AM -0700, Shakeel Butt wrote:
> > On Mon, Jun 22, 2020 at 11:02 AM Roman Gushchin wrote:
> > >
> > > On Mon, Jun 22, 2020 at 10:12:46AM -0700, Shakeel Butt wrote:
> >
enerate better code.
>
> Signed-off-by: Roman Gushchin
> Reviewed-by: Vlastimil Babka
Reviewed-by: Shakeel Butt
On Mon, Jun 8, 2020 at 4:07 PM Roman Gushchin wrote:
>
> Instead of having two sets of kmem_caches: one for system-wide and
> non-accounted allocations and the second one shared by all accounted
> allocations, we can use just one.
>
> The idea is simple: space for obj_cgroup metadata can be
On Mon, Jun 22, 2020 at 1:37 PM Roman Gushchin wrote:
>
> On Mon, Jun 22, 2020 at 12:21:28PM -0700, Shakeel Butt wrote:
> > On Mon, Jun 8, 2020 at 4:07 PM Roman Gushchin wrote:
> > >
> > > Instead of having two sets of kmem_caches: one for system-wide and
&
On Mon, Jun 22, 2020 at 2:15 PM Roman Gushchin wrote:
>
> On Mon, Jun 22, 2020 at 02:04:29PM -0700, Shakeel Butt wrote:
> > On Mon, Jun 22, 2020 at 1:37 PM Roman Gushchin wrote:
> > >
> > > On Mon, Jun 22, 2020 at 12:21:28PM -0700, Shakeel Butt wrote:
> > >
On Mon, Jun 22, 2020 at 2:58 PM Roman Gushchin wrote:
>
> On Mon, Jun 22, 2020 at 02:28:54PM -0700, Shakeel Butt wrote:
> > On Mon, Jun 22, 2020 at 2:15 PM Roman Gushchin wrote:
> > >
> > > On Mon, Jun 22, 2020 at 02:04:29PM -0700, Shakeel Butt wrote:
> >
can run
> for a very long time given a large process. This commit therefore adds
> a cond_resched() to this loop, providing RCU any needed quiescent states.
>
> Cc: Andrew Morton
> Cc:
> Signed-off-by: Paul E. McKenney
We have exactly the same change in our internal kernel s
On Mon, Jun 8, 2020 at 4:07 PM Roman Gushchin wrote:
>
> Obj_cgroup API provides an ability to account sub-page sized kernel
> objects, which potentially outlive the original memory cgroup.
>
> The top-level API consists of the following functions:
> bool obj_cgroup_tryget(struct obj_cgroup
On Mon, Jun 8, 2020 at 4:07 PM Roman Gushchin wrote:
>
> Allocate and release memory to store obj_cgroup pointers for each
> non-root slab page. Reuse page->mem_cgroup pointer to store a pointer
> to the allocated space.
>
> To distinguish between obj_cgroups and memcg pointers in case
> when
patchset!
>
> Andrew, can you, please, squash the following fix based on Shakeel's
> suggestions?
> Thanks!
>
> --
For the following squashed into the original patch:
Reviewed-by: Shakeel Butt
>
> diff --git a/include/linux/memcontrol.h b/include/linux/memcontrol.h
> index 7ed3af71
ut it will be simplified
> by next commits in the series.
>
> Signed-off-by: Roman Gushchin
> Reviewed-by: Vlastimil Babka
One nit below otherwise:
Reviewed-by: Shakeel Butt
> ---
[snip]
> +static inline void memcg_slab_post_alloc_hook(struct kmem_cache *s,
> +
On Fri, Jun 19, 2020 at 5:25 PM Roman Gushchin wrote:
>
> On Fri, Jun 19, 2020 at 09:36:16AM -0700, Shakeel Butt wrote:
> > On Mon, Jun 8, 2020 at 4:07 PM Roman Gushchin wrote:
> > >
> > > Allocate and release memory to store obj_cgroup pointers for each
> &