On Thu, Aug 20, 2020 at 09:03:49AM -0400, Waiman Long wrote:
> The mem_cgroup_get_max() function used to get memory+swap max from
> both the v1 memsw and v2 memory+swap page counters and return the maximum
> of these two values. This is redundant, and it is more efficient to just
> get either the v1 or
er, the enum itself
> was not removed at that time. Remove the obsolete enum charge_type now.
>
> Signed-off-by: Waiman Long
Acked-by: Johannes Weiner
On Tue, Aug 18, 2020 at 12:18:44PM +0200, pet...@infradead.org wrote:
> What you need is a feedback loop against the rate of freeing pages, and
> when you near the saturation point, the allocation rate should exactly
> match the freeing rate.
IO throttling solves a slightly different problem.
IO o
On Thu, Aug 13, 2020 at 12:44:16PM +0200, Michal Hocko wrote:
> This smells like 3e38e0aaca9e ("mm: memcg: charge memcg percpu memory to
> the parent cgroup").
I just replied to the other thread on this issue here:
https://lore.kernel.org/lkml/20200813152033.ga701...@cmpxchg.org/
patch in linux-next up until today
> is different. :-(
Sorry, I made a last-minute request to include these checks in that
patch to make the code a bit more robust, but they trigger a false
positive here. Let's remove them.
---
From de8ea7c96c056c3cbe7b93995029986a158fb9cd Mon Sep 17 00:00
ount the consumed percpu memory to the parent cgroup.
>
> Signed-off-by: Roman Gushchin
> Acked-by: Dennis Zhou
Acked-by: Johannes Weiner
This makes sense, and the accounting is in line with how we track and
distribute child creation quotas (cgroup.max.descendants and
cgroup.max.depth) up t
>
> Signed-off-by: Roman Gushchin
> Acked-by: Dennis Zhou
Acked-by: Johannes Weiner
On Mon, Aug 03, 2020 at 11:00:33AM +0200, Michal Hocko wrote:
> On Tue 23-06-20 10:40:23, Roman Gushchin wrote:
> > @@ -5456,7 +5460,10 @@ static int mem_cgroup_move_account(struct page *page,
> > */
> > smp_mb();
> >
> > - page->mem_cgroup = to; /* caller should have done css_get */
On Fri, Jul 31, 2020 at 11:34:40AM +0800, Alex Shi wrote:
> Since readahead pages will be charged to memcg too, we don't need to
> check this exception now. Removing them is safe, as all user pages are
> charged before use.
>
> Signed-off-by: Alex Shi
> Cc: Johannes Weiner
references along with the page (move_account), and swap
entries use the mem_cgroup_id references which pin the css indirectly.
Leaving that css_put_many behind in the swap path was an oversight.
Acked-by: Johannes Weiner
> ---
> Fixes mm-memcontrol-decouple-reference-counting-from-page-ac
high")
Signed-off-by: Johannes Weiner
---
mm/memcontrol.c | 1 +
1 file changed, 1 insertion(+)
diff --git a/mm/memcontrol.c b/mm/memcontrol.c
index 13f559af1ab6..805a44bf948c 100644
--- a/mm/memcontrol.c
+++ b/mm/memcontrol.c
@@ -6071,6 +6071,7 @@ static ssize_t memory_high_write(str
allocation context, and thereby no longer count any
limit-setting reclaim as memory pressure. If the newly set limit
causes the workload inside the cgroup to enter direct reclaim, that of
course will continue to count as memory pressure.
Signed-off-by: Johannes Weiner
---
mm/memcontrol.c | 12
On Tue, Jul 21, 2020 at 11:19:52AM +, jingrui wrote:
> Cc: Johannes Weiner ; Michal Hocko ;
> Vladimir Davydov
>
> Thanks.
>
> ---
> PROBLEM: cgroup costs too much memory when transferring small files to tmpfs.
>
> keywords: cgroup PERCPU/memory cost too much.
>
ling/retrieving
> the shadow entry.
>
> Signed-off-by: Joonsoo Kim
Acked-by: Johannes Weiner
On Wed, Jun 17, 2020 at 02:26:19PM +0900, js1...@gmail.com wrote:
> From: Joonsoo Kim
>
> In the current implementation, a newly created or swapped-in anonymous
> page is started on the active list. Growing the active list results in
> rebalancing the active/inactive lists, so old pages on the active list are demoted to in
On Tue, Jul 14, 2020 at 10:06:32AM -0700, Shakeel Butt wrote:
> On Tue, Jul 14, 2020 at 8:39 AM Johannes Weiner wrote:
> > The way we do this right now is having the reclaimer daemon in a
> > dedicated top-level cgroup with memory.min protection.
> >
> > This w
nefit: it unifies the reclaim behaviour between
> the two.
>
> There's precedent for this behaviour: we already do reclaim retries when
> writing to memory.{high,max}, in max reclaim, and in the page allocator
> itself.
>
> Signed-off-by: Chris Down
> Cc: Andrew Morton
> Cc: Johannes Weiner
> Cc: Tejun Heo
> Cc: Michal Hocko
Acked-by: Johannes Weiner
On Fri, Jul 10, 2020 at 12:19:37PM -0700, Shakeel Butt wrote:
> On Fri, Jul 10, 2020 at 11:42 AM Roman Gushchin wrote:
> >
> > On Fri, Jul 10, 2020 at 07:12:22AM -0700, Shakeel Butt wrote:
> > > On Fri, Jul 10, 2020 at 5:29 AM Michal Hocko wrote:
> > > >
> > > > On Thu 09-07-20 12:47:18, Roman Gu
iation")
> Signed-off-by: Michal Hocko
Good catch, thanks Michal.
Acked-by: Johannes Weiner
ly biased. Fix it to count it
> in fault code.
>
> Signed-off-by: Joonsoo Kim
Acked-by: Johannes Weiner
it's better for this patch to be squashed into the patch
> "mm: workingset: age nonresident information alongside anonymous pages".
>
> Signed-off-by: Joonsoo Kim
Acked-by: Johannes Weiner
On Tue, Jun 16, 2020 at 03:57:50PM +0800, kernel test robot wrote:
> Greeting,
>
> FYI, we noticed a -11.5% regression of vm-scalability.throughput due to
> commit:
>
>
> commit: 1431d4d11abb265e79cd44bed2f5ea93f1bcc57b ("mm: base LRU balancing on
> an explicit cost model")
> https://git.kerne
ed to handle the div0 case.
Check the parent state explicitly to make sure we have a reasonable
positive value for the divisor.
Fixes: 8a931f801340 ("mm: memcontrol: recursive memory.low protection")
Reported-by: Tejun Heo
Cc:
Signed-off-by: Johannes Weiner
---
mm/memcontrol.c | 9 ++
On Fri, Jun 12, 2020 at 12:19:58PM +0900, Joonsoo Kim wrote:
> On Fri, Jun 5, 2020 at 12:06 AM, Johannes Weiner wrote:
> >
> > On Thu, Jun 04, 2020 at 03:35:27PM +0200, Vlastimil Babka wrote:
> > > On 6/1/20 10:44 PM, Johannes Weiner wrote:
> > > > From a8faceabc1454df
On Tue, Jun 09, 2020 at 05:15:33PM +0800, Alex Shi wrote:
>
>
> On 2020/6/4 at 7:03 AM, Andrew Morton wrote:
> >
> > + /* XXX: Move to lru_cache_add() when it supports new vs putback */
>
> Hi Hannes,
>
> Sorry for a bit lost, would you like to explain a bit more of your idea here?
>
> > + spin_
On Thu, Jun 04, 2020 at 03:35:27PM +0200, Vlastimil Babka wrote:
> On 6/1/20 10:44 PM, Johannes Weiner wrote:
> > From a8faceabc1454dfd878caee2a8422493d937a394 Mon Sep 17 00:00:00 2001
> > From: Johannes Weiner
> > Date: Mon, 1 Jun 2020 14:04:09 -0400
> > Subject:
On Tue, Jun 02, 2020 at 11:34:17AM +0900, Joonsoo Kim wrote:
> On Tue, Jun 2, 2020 at 12:56 AM, Johannes Weiner wrote:
> > On Mon, Jun 01, 2020 at 03:14:24PM +0900, Joonsoo Kim wrote:
> > > But, I still think that modified refault activation equation isn't
> > > safe. T
On Mon, Jun 01, 2020 at 11:56:17AM -0400, Johannes Weiner wrote:
> On Mon, Jun 01, 2020 at 03:14:24PM +0900, Joonsoo Kim wrote:
> > On Sat, May 30, 2020 at 12:12 AM, Johannes Weiner wrote:
> > > However, your example cannot have a completely silent stable state. As
> > > we
On Mon, Jun 01, 2020 at 03:14:24PM +0900, Joonsoo Kim wrote:
> On Sat, May 30, 2020 at 12:12 AM, Johannes Weiner wrote:
> >
> > On Fri, May 29, 2020 at 03:48:00PM +0900, Joonsoo Kim wrote:
> > > On Fri, May 29, 2020 at 2:02 AM, Johannes Weiner wrote:
> > > > On Thu, May 28, 2
On Fri, May 29, 2020 at 03:48:00PM +0900, Joonsoo Kim wrote:
> On Fri, May 29, 2020 at 2:02 AM, Johannes Weiner wrote:
> > On Thu, May 28, 2020 at 04:16:50PM +0900, Joonsoo Kim wrote:
> > > On Wed, May 27, 2020 at 10:43 PM, Johannes Weiner wrote:
> > > > On Wed, May 27, 2020 at
On Thu, May 28, 2020 at 08:48:31PM +0100, Chris Down wrote:
> Shakeel Butt writes:
> > What was the initial reason to have different behavior in the first place?
>
> This differing behaviour is simply a mistake; it was never intended to
> deviate this much from what happens elsewhere. To that extent
On Thu, May 28, 2020 at 06:31:01PM +0200, Michal Hocko wrote:
> On Thu 21-05-20 14:45:05, Johannes Weiner wrote:
> > After analyzing this problem, it's clear that we had an oversight
> > here: all other reclaimers are already familiar with the fact that
> > reclaim may n
;
> Hide this function in a matching #ifdef.
>
> Fixes: 5bd144bf764c ("mm: memcontrol: drop unused try/commit/cancel charge
> API")
> Signed-off-by: Arnd Bergmann
Acked-by: Johannes Weiner
Thanks Arnd!
On Thu, May 28, 2020 at 04:16:50PM +0900, Joonsoo Kim wrote:
> On Wed, May 27, 2020 at 10:43 PM, Johannes Weiner wrote:
> >
> > On Wed, May 27, 2020 at 11:06:47AM +0900, Joonsoo Kim wrote:
> > > On Thu, May 21, 2020 at 8:26 AM, Johannes Weiner wrote:
> > > >
> >
On Tue, May 26, 2020 at 02:42:14PM -0700, Roman Gushchin wrote:
> @@ -257,6 +257,98 @@ struct cgroup_subsys_state *vmpressure_to_css(struct
> vmpressure *vmpr)
> }
>
> #ifdef CONFIG_MEMCG_KMEM
> +extern spinlock_t css_set_lock;
> +
> +static void obj_cgroup_release(struct percpu_ref *ref)
> +{
kes a page address. obj_to_index()
> will be a simple wrapper taking a page pointer and passing
> page_address(page) into __obj_to_index().
>
> Signed-off-by: Roman Gushchin
> Reviewed-by: Vlastimil Babka
Acked-by: Johannes Weiner
On Wed, May 27, 2020 at 11:29:58AM -0700, Shakeel Butt wrote:
> From: Johannes Weiner
>
> Currently, THP are counted as single pages until they are split right
> before being swapped out. However, at that point the VM is already in
> the middle of reclaim, and adjusting the LRU
On Tue, May 26, 2020 at 01:08:00PM -0700, Boris Burkov wrote:
> Currently, the root cgroup does not have a cpu.stat file. Add one which
> is consistent with /proc/stat to capture global cpu statistics that
> might not fall under cgroup accounting.
>
> We haven't done this in the past because the d
On Tue, May 26, 2020 at 04:01:07PM -0600, Jens Axboe wrote:
> On 5/26/20 3:59 PM, Johannes Weiner wrote:
> > On Tue, May 26, 2020 at 01:51:15PM -0600, Jens Axboe wrote:
> >> Normally waiting for a page to become unlocked, or locking the page,
> >> requires waiting for
On Wed, May 27, 2020 at 11:06:47AM +0900, Joonsoo Kim wrote:
> On Thu, May 21, 2020 at 8:26 AM, Johannes Weiner wrote:
> >
> > We activate cache refaults with reuse distances in pages smaller than
> > the size of the total cache. This allows new pages with competitive
> > acce
On Tue, May 26, 2020 at 01:51:22PM -0600, Jens Axboe wrote:
> Checks if the file supports it, and initializes the values that we need.
> Caller passes in 'data' pointer, if any, and the callback function to
> be used.
>
> Signed-off-by: Jens Axboe
Acked-by: Johannes Weiner
RECT signals the same operation. Once the callback is received by
> the caller for IO completion, the caller must retry the operation.
>
> Signed-off-by: Jens Axboe
Acked-by: Johannes Weiner
s made public,
> and we define struct wait_page_async as the interface between the caller
> and the core.
>
> Signed-off-by: Jens Axboe
Acked-by: Johannes Weiner
On Tue, May 26, 2020 at 01:51:14PM -0600, Jens Axboe wrote:
> No functional changes in this patch, just in preparation for allowing
> more callers.
>
> Signed-off-by: Jens Axboe
Acked-by: Johannes Weiner
On Tue, May 26, 2020 at 01:51:13PM -0600, Jens Axboe wrote:
> The read-ahead shouldn't block, so allow it to be done even if
> IOCB_NOWAIT is set in the kiocb.
>
> Signed-off-by: Jens Axboe
Acked-by: Johannes Weiner
Looks reasonable. Especially after patch 1 - although i
n-shmem) FS")
> Cc: sta...@vger.kernel.org # v5.4+
Acked-by: Johannes Weiner
On Fri, May 22, 2020 at 09:33:35AM -0400, Qian Cai wrote:
> On Wed, May 20, 2020 at 07:25:20PM -0400, Johannes Weiner wrote:
> > Operations like MADV_FREE, FADV_DONTNEED etc. currently move any
> > affected active pages to the inactive list to accelerate their reclaim
> >
cache allocation (e.g. highmem) to radix node allocation (lowmem), but
> we don't need or usually apply that mask when charging mem_cgroup.
>
> Signed-off-by: Hugh Dickins
> ---
Acked-by: Johannes Weiner
> Mostly fixing mm-memcontrol-charge-swapin-pages-on-instantiation.pa
On Thu, May 21, 2020 at 01:06:28PM -0700, Hugh Dickins wrote:
> On Thu, 21 May 2020, Johannes Weiner wrote:
> > do_memsw_account() used to be automatically false when the cgroup
> > controller was disabled. Now that it's replaced by
> > cgroup_memory_noswap, for which
8304, 163840, 229376, 294912, 819200, 884736, 1605632, 2654208,
> > > > 4096000, 7962624, 11239424, 2048, 23887872, 71663616, 78675968,
> > > > 10240, 214990848
> > > > Allocating group tables:0/7453 done
> > > > W
On Thu, May 21, 2020 at 07:37:01PM +0200, Michal Hocko wrote:
> On Thu 21-05-20 12:38:33, Johannes Weiner wrote:
> > On Thu, May 21, 2020 at 04:35:15PM +0200, Michal Hocko wrote:
> > > On Thu 21-05-20 09:51:52, Johannes Weiner wrote:
> > > > On Thu, May 21, 2020 at
On Thu, May 21, 2020 at 04:35:15PM +0200, Michal Hocko wrote:
> On Thu 21-05-20 09:51:52, Johannes Weiner wrote:
> > On Thu, May 21, 2020 at 09:32:45AM +0200, Michal Hocko wrote:
> [...]
> > > I am not saying the looping over try_to_free_pages is wrong. I do care
> >
On Thu, May 21, 2020 at 09:51:55AM -0400, Johannes Weiner wrote:
> On Thu, May 21, 2020 at 09:32:45AM +0200, Michal Hocko wrote:
> > I wouldn't mind to loop over try_to_free_pages to meet the requested
> > memcg_nr_pages_over_high target.
>
> Should we do the same for gl
On Thu, May 21, 2020 at 09:32:45AM +0200, Michal Hocko wrote:
> On Wed 20-05-20 13:51:35, Johannes Weiner wrote:
> > On Wed, May 20, 2020 at 07:04:30PM +0200, Michal Hocko wrote:
> > > On Wed 20-05-20 12:51:31, Johannes Weiner wrote:
> > > > On Wed, May 20, 2020 at
They're the same function, and for the purpose of all callers they are
equivalent to lru_cache_add().
Signed-off-by: Johannes Weiner
Reviewed-by: Rik van Riel
Acked-by: Michal Hocko
Acked-by: Minchan Kim
---
fs/cifs/file.c | 10 +-
fs/fuse/dev.c| 2 +-
include/
path.
Signed-off-by: Johannes Weiner
Reviewed-by: Rik van Riel
Acked-by: Michal Hocko
Acked-by: Minchan Kim
---
mm/swap.c | 25 +++--
1 file changed, 11 insertions(+), 14 deletions(-)
diff --git a/mm/swap.c b/mm/swap.c
index bf9a79fed62d..68eae1e2787a 100644
--- a/mm
's being
pressured, while still allowing a generous rate of convergence when
the relative sizes of the lists need to adjust.
Signed-off-by: Johannes Weiner
---
mm/vmscan.c | 22 +-
1 file changed, 13 insertions(+), 9 deletions(-)
diff --git a/mm/vmscan.c b/mm/vmsc
faulting of recently evicted pages.
Replace struct zone_reclaim_stat with two cost counters in the lruvec,
and make everything that affects cost go through a new lru_note_cost()
function.
v2: remove superfluous cost denominator (Minchan Kim)
improve cost variable naming (Michal Hocko)
Sig
chal Hocko)
Signed-off-by: Johannes Weiner
Acked-by: Minchan Kim
Acked-by: Michal Hocko
---
mm/vmscan.c | 8 +++-
1 file changed, 3 insertions(+), 5 deletions(-)
diff --git a/mm/vmscan.c b/mm/vmscan.c
index 6ff63906a288..2c3fb8dd1159 100644
--- a/mm/vmscan.c
+++ b/mm/vmscan.c
@@ -2054
en thrashing on the cache, it will tip the
pressure balance inside its ancestors, and the next hierarchical
reclaim iteration will go more after the anon pages in the tree.
Signed-off-by: Johannes Weiner
---
include/linux/memcontrol.h | 13
mm/swap.c
the least amount of IO on aggregate.
Signed-off-by: Johannes Weiner
---
include/linux/swap.h | 3 +--
mm/swap.c| 11 +++
mm/swap_state.c | 5 +
mm/vmscan.c | 39 ++-
mm/workingset.c | 4
5 files changed, 27
st of
cache misses between page cache and swap-backed pages - to reflect
such situations by making the swap-preferred range configurable.
v2: clarify how to calculate swappiness (Minchan Kim)
Signed-off-by: Johannes Weiner
---
Documentation/admin-guide/sysctl/vm.rst | 23 ++-
k
that spirit, leave
explicitly deactivated pages to the LRU algorithm to pick up, and let
rotations decide which list is the easiest to reclaim.
Signed-off-by: Johannes Weiner
Acked-by: Minchan Kim
Acked-by: Michal Hocko
---
mm/swap.c | 4
1 file changed, 4 deletions(-)
diff --git a/mm/swa
't be. ap + fp is always at least 1. Drop the + 1.
Signed-off-by: Johannes Weiner
---
mm/vmscan.c | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/mm/vmscan.c b/mm/vmscan.c
index 43f88b1a4f14..6cd1029ea9d4 100644
--- a/mm/vmscan.c
+++ b/mm/vmscan.c
@@ -2348,7 +2348,7
ies in swap unless the page changes. If there is
swap-backed data that's mostly read (tmpfs file) and has been swapped
out before, we can reclaim it without incurring additional IO.
Signed-off-by: Johannes Weiner
---
include/linux/swap.h | 4 +++-
include/linux/vmstat.h | 1 +
mm/swap.c
ly as long as there is used-once cache present, and will apply
the LRU balancing when only repeatedly accessed cache pages are left -
at which point the previous use-once bias will have been
neutralized. This makes the use-once cache balancing bias unnecessary.
Signed-off-by: Johannes Weiner
), followed by vmstats, reclaim_stats, and then vm events.
Signed-off-by: Johannes Weiner
---
include/linux/vm_event_item.h | 4
mm/vmscan.c | 17 +
mm/vmstat.c | 4
3 files changed, 17 insertions(+), 8 deletions(-)
diff --git a/include/linux
iding
random IO from swap and go harder after cache. But fundamentally, hot
cache should be able to compete with anon pages for a place in RAM.
Signed-off-by: Johannes Weiner
---
mm/workingset.c | 17 -
1 file changed, 12 insertions(+), 5 deletions(-)
diff --git a/mm/workingset.c
The reclaim code that balances between swapping and cache reclaim
tries to predict likely reuse based on in-memory reference patterns
alone. This works in many cases, but when it fails it cannot detect
when the cache is thrashing pathologically, or when we're in the
middle of a swap storm.
The hig
On Wed, May 20, 2020 at 07:04:30PM +0200, Michal Hocko wrote:
> On Wed 20-05-20 12:51:31, Johannes Weiner wrote:
> > On Wed, May 20, 2020 at 06:07:56PM +0200, Michal Hocko wrote:
> > > On Wed 20-05-20 15:37:12, Chris Down wrote:
> > > > In Facebook production, we
On Wed, May 20, 2020 at 06:07:56PM +0200, Michal Hocko wrote:
> On Wed 20-05-20 15:37:12, Chris Down wrote:
> > In Facebook production, we've seen cases where cgroups have been put
> > into allocator throttling even when they appear to have a lot of slack
> > file caches which should be trivially r
On Fri, May 15, 2020 at 10:49:22AM -0700, Shakeel Butt wrote:
> On Fri, May 15, 2020 at 8:00 AM Roman Gushchin wrote:
> > On Fri, May 15, 2020 at 06:44:44AM -0700, Shakeel Butt wrote:
> > > On Fri, May 15, 2020 at 6:24 AM Johannes Weiner
> > > wrote:
> > > &
On Fri, May 15, 2020 at 10:29:55AM +0200, Michal Hocko wrote:
> On Sat 09-05-20 07:06:38, Shakeel Butt wrote:
> > On Fri, May 8, 2020 at 2:44 PM Johannes Weiner wrote:
> > >
> > > On Fri, May 08, 2020 at 10:06:30AM -0700, Shakeel Butt wrote:
> > > > One
On Wed, May 13, 2020 at 02:15:19PM -0700, Andrew Morton wrote:
> On Tue, 12 May 2020 17:29:36 -0400 Johannes Weiner wrote:
>
> >
> > ...
> >
> > Solution
> >
> > This patch fixes the aging inversion described above on
> > !CONFIG_HIGHMEM system
On Wed, May 13, 2020 at 07:24:10PM -0700, Andrew Morton wrote:
> On Tue, 12 May 2020 17:29:36 -0400 Johannes Weiner wrote:
>
> > + inode_pages_clear(mapping->inode);
> > + else if (populated == 1)
> > + inode_pages_set(mapping->inode);
>
On Wed, May 13, 2020 at 09:32:58AM +0800, Yafang Shao wrote:
> On Wed, May 13, 2020 at 5:29 AM Johannes Weiner wrote:
> >
> > On Tue, Feb 11, 2020 at 12:55:07PM -0500, Johannes Weiner wrote:
> > > The VFS inode shrinker is currently allowed to reclaim inodes with
> >
Hello Balbir!
On Wed, May 13, 2020 at 11:30:32AM +, Balbir Singh wrote:
> On Fri, May 08, 2020 at 02:30:47PM -0400, Johannes Weiner wrote:
> > To eliminate the page->mapping dependency, memcg needs to ditch its
> > private page type counters (MEMCG_CACHE, MEMCG_RSS, NR_SHME
On Thu, May 07, 2020 at 03:26:31PM -0700, Roman Gushchin wrote:
> On Thu, May 07, 2020 at 05:03:14PM -0400, Johannes Weiner wrote:
> > On Wed, Apr 22, 2020 at 01:46:55PM -0700, Roman Gushchin wrote:
> > > --- a/mm/memcontrol.c
> > > +++ b/mm/memcontrol.c
> &
On Tue, May 12, 2020 at 10:38:54AM -0400, Qian Cai wrote:
> > On May 8, 2020, at 2:30 PM, Johannes Weiner wrote:
> >
> > With the page->mapping requirement gone from memcg, we can charge anon
> > and file-thp pages in one single step, right after they're allocate
On Tue, Feb 11, 2020 at 12:55:07PM -0500, Johannes Weiner wrote:
> The VFS inode shrinker is currently allowed to reclaim inodes with
> populated page cache. As a result it can drop gigabytes of hot and
> active page cache on the floor without consulting the VM (recorded as
> "i
unters, memcg doesn't see the pages, it
only gets a count of THPs. To translate that to bytes, it has to know
how big the THPs are - and that's only available for CONFIG_THP.
Add the necessary ifdefs. /proc/meminfo, smaps etc. also don't show
the THP counters when the feature is compiled
On Mon, May 11, 2020 at 02:10:58PM -0400, Johannes Weiner wrote:
> From fc9dcaf68c8b54baf365cd670fb5780c7f0d243f Mon Sep 17 00:00:00 2001
> From: Johannes Weiner
> Date: Mon, 11 May 2020 12:59:08 -0400
> Subject: [PATCH] mm: shmem: remove rare optimization when swapin races with
>
On Mon, May 11, 2020 at 09:32:16AM -0700, Hugh Dickins wrote:
> On Mon, 11 May 2020, Johannes Weiner wrote:
> > On Mon, May 11, 2020 at 12:38:04AM -0700, Hugh Dickins wrote:
> > > On Fri, 8 May 2020, Johannes Weiner wrote:
> > > >
> > > > I looked at thi
On Thu, May 07, 2020 at 10:00:07AM -0700, Shakeel Butt wrote:
> On Thu, May 7, 2020 at 9:47 AM Michal Hocko wrote:
> >
> > On Thu 07-05-20 09:33:01, Shakeel Butt wrote:
> > [...]
> > > @@ -2600,8 +2596,23 @@ static int try_charge(struct mem_cgroup *memcg,
> > > gfp_t gfp_mask,
> > >
On Mon, May 11, 2020 at 12:38:04AM -0700, Hugh Dickins wrote:
> On Fri, 8 May 2020, Johannes Weiner wrote:
> >
> > I looked at this some more, as well as compared it to non-shmem
> > swapping. My conclusion is - and Hugh may correct me on this - that
> > the dele
.
> Also for PGLAZYFREE use the irq-unsafe function to update as the irq is
> already disabled.
>
> Fixes: 2262185c5b28 ("mm: per-cgroup memory reclaim stats")
> Signed-off-by: Shakeel Butt
Acked-by: Johannes Weiner
ady disabled.
>
> Signed-off-by: Shakeel Butt
Acked-by: Johannes Weiner
On Fri, May 08, 2020 at 02:22:15PM -0700, Shakeel Butt wrote:
> Currently update_page_reclaim_stat() updates the lruvec.reclaim_stats
> just once for a page irrespective if a page is huge or not. Fix that by
> passing the hpage_nr_pages(page) to it.
>
> Signed-off-by: Shakeel Butt
https://lore.k
ent. The
> cgroup's pgsteal contains number of reclaimed pages for global as well
> as cgroup reclaim. So, one way to get the system level stats is to get
> these stats from root's memory.stat, so, expose memory.stat for the root
> cgroup.
>
> from Johannes Weiner:
&g
The uncharge batching code adds up the anon, file, kmem counts to
determine the total number of pages to uncharge and references to
drop. But the next patches will remove the anon and file counters.
Maintain an aggregate nr_pages in the uncharge_gather struct.
Signed-off-by: Johannes Weiner
nters, thus removing the page->mapping
dependency, then complete the transition to the new single-point
charge API and delete the old transactional scheme.
v2: leave shmem swapcache when charging fails to avoid double IO (Joonsoo)
Signed-off-by: Johannes Weiner
Reviewed-by: Alex Shi
---
include/l
e: Add and replace pages using the XArray")
Signed-off-by: Johannes Weiner
Reviewed-by: Alex Shi
Reviewed-by: Shakeel Butt
Reviewed-by: Joonsoo Kim
---
mm/filemap.c | 4 ++--
1 file changed, 2 insertions(+), 2 deletions(-)
diff --git a/mm/filemap.c b/mm/filemap.c
index af1c6adad5bd..2b05
"set up" and when it's "published" - somewhat vague and fluid
concepts that varied by page type. All we need is a freshly allocated
page and a memcg context to charge.
v2: prevent double charges on pre-allocated hugepages in khugepaged
Signed-off-by: Johannes Weiner
Revie
hen replace MEMCG_CACHE with NR_FILE_PAGES and delete the private
NR_SHMEM accounting sites.
Signed-off-by: Johannes Weiner
Reviewed-by: Joonsoo Kim
---
include/linux/memcontrol.h | 3 +--
mm/filemap.c | 17 +
mm/khugepaged.c| 16 +++-
mm/m
onsoo)
Signed-off-by: Johannes Weiner
---
include/linux/memcontrol.h | 3 +--
kernel/events/uprobes.c| 2 +-
mm/huge_memory.c | 2 +-
mm/khugepaged.c| 2 +-
mm/memcontrol.c| 27 --
mm/memory.c| 10
mm/migrate.c
ways have the cgroup records at swapin
time; the next patch will fix the actual bug by charging readahead
swap pages at swapin time rather than at fault time.
v2: fix double swap charge bug in cgroup1/cgroup2 code gating
Signed-off-by: Johannes Weiner
Reviewed-by: Joonsoo Kim
---
i
ilize the
page->mem_cgroup association:
- the page lock
- LRU isolation
- lock_page_memcg()
- exclusive access to the page
Signed-off-by: Johannes Weiner
Reviewed-by: Alex Shi
Reviewed-by: Joonsoo Kim
---
mm/memcontrol.c | 21 +++--
1 file changed, 7 insertions(+), 14
bove problems.
v2: simplify swapin error checking (Joonsoo)
Signed-off-by: Johannes Weiner
Reviewed-by: Alex Shi
---
mm/memory.c | 15 ++---
mm/shmem.c | 14
mm/swap_state.c | 89 ++---
mm/swapfile.c | 6
4 files changed, 67
Swapin faults were the last event to charge pages after they had
already been put on the LRU list. Now that we charge directly on
swapin, the lrucare portion of the charge code is unused.
Signed-off-by: Johannes Weiner
Reviewed-by: Joonsoo Kim
---
include/linux/memcontrol.h | 5 ++--
kernel
There are no more users. RIP in peace.
Signed-off-by: Johannes Weiner
Reviewed-by: Joonsoo Kim
---
include/linux/memcontrol.h | 36 ---
mm/memcontrol.c| 126 +
2 files changed, 15 insertions(+), 147 deletions(-)
diff --git a/include