On 1/8/21 7:46 PM, Christoph Lameter wrote:
> I am ok with you as a slab maintainer. I have seen some good work from
> you.
>
> Acked-by: Christoph Lameter
Thanks!
Vlastimil
Signed-off-by: Johannes Berg
Acked-by: Vlastimil Babka
> ---
> Perhaps instead it should go the other way around, and kmemleak
> could even use/access the stack trace that's already in there ...
> But I don't really care too much, I can just turn off slub debug
> for the kme
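For reference, slub debug can be switched off for a single cache on the kernel command line; a sketch, assuming the kmemleak cache is named kmemleak_object and the multi-block slub_debug syntax (available since v5.8) is in use:

	slub_debug=FZ;-,kmemleak_object

This would keep F and Z debugging for every other cache while disabling debugging for kmemleak's own objects.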
On 1/12/21 5:35 PM, Christoph Lameter wrote:
> On Tue, 12 Jan 2021, Jann Horn wrote:
>
>> [This is not something I intend to work on myself. But since I
>> stumbled over this issue, I figured I should at least document/report
>> it, in case anyone is willing to pick it up.]
>
> Well yeah all true
On 1/12/21 12:12 AM, Jann Horn wrote:
> [This is not something I intend to work on myself. But since I
> stumbled over this issue, I figured I should at least document/report
> it, in case anyone is willing to pick it up.]
>
> Hi!
Hi, thanks for saving me a lot of typing!
...
> This means that
e false-positives by resetting pointer tags during these accesses.
>
> Link:
> https://linux-review.googlesource.com/id/I50dd32838a666e173fe06c3c5c766f2c36aae901
> Fixes: aa1ef4d7b3f67 ("kasan, mm: reset tags when accessing metadata")
> Reported-by: Dmitry Vyukov
> Signed-off-
On 1/13/21 5:09 PM, Johannes Berg wrote:
> From: Johannes Berg
>
> If kmemleak is enabled, it uses a kmem cache for its own objects.
> These objects are used to hold information kmemleak uses, including
> a stack trace. If slub_debug is also turned on, each of them has
> *another* stack trace, so
On 1/12/21 10:21 AM, Faiyaz Mohammed wrote:
> Reading the sysfs slab alloc_calls and free_calls files returns the available
> object owners, but because sysfs attributes are limited to PAGE_SIZE,
> only partial owner info is returned, which is not sufficient to d
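For context, the PAGE_SIZE limit comes from the sysfs contract: an attribute's ->show() callback fills a single page-sized buffer. A minimal sketch of that constraint, using a hypothetical attribute (not the actual slab code):

	#include <linux/sysfs.h>
	#include <linux/kobject.h>

	/* hypothetical attribute illustrating the limit: sysfs hands ->show()
	 * exactly one page, so output beyond PAGE_SIZE is simply lost */
	static ssize_t alloc_calls_show(struct kobject *kobj,
					struct kobj_attribute *attr, char *buf)
	{
		/* sysfs_emit() never writes past the single PAGE_SIZE buffer */
		return sysfs_emit(buf, "at most one page of owner info fits here\n");
	}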
u_dead() hotplug callback takes the
slab_mutex.
To sum up, this patch removes get/put_online_cpus() calls from slab as it
should be safe without further adjustments.
Signed-off-by: Vlastimil Babka
---
mm/slab_common.c | 10 --
1 file changed, 10 deletions(-)
diff --git a/mm/slab_common.c b/
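For context, the hotplug callback in question is shaped roughly like this (a simplified sketch of slub_cpu_dead(), using the internal slab_mutex/slab_caches symbols from mm/slab.h, not the verbatim code); since it already serializes on slab_mutex, wrapping cache creation in get/put_online_cpus() adds no protection:

	#include <linux/slab.h>
	#include <linux/mutex.h>

	/* simplified sketch of the CPUHP_SLUB_DEAD callback: the per-cpu
	 * slabs of every cache are flushed under slab_mutex, so cache
	 * creation/destruction is already serialized against it */
	static int slub_cpu_dead(unsigned int cpu)
	{
		struct kmem_cache *s;

		mutex_lock(&slab_mutex);
		list_for_each_entry(s, &slab_caches, list)
			__flush_cpu_slab(s, cpu);	/* flush that CPU's slabs */
		mutex_unlock(&slab_mutex);
		return 0;
	}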
generally such
nodes should be movable in order for hotremove to succeed in the first place, and
thus the GFP_KERNEL allocated kmem_cache_node will come from elsewhere.
[1] https://lore.kernel.org/linux-mm/20190924151147.gb23...@dhcp22.suse.cz/
Signed-off-by: Vlastimil Babka
---
mm/slub.c | 28
_mutex against races with these paths. The
problem of SLUB relying on N_NORMAL_MEMORY doesn't apply to SLAB, as its
setup_kmem_cache_nodes relies on N_ONLINE, and the new node is already set
there during the MEM_GOING_ONLINE callback, so no special care is needed
for SLAB.
As such, this pa
), but the most sane solution is not to introduce more of them, but
rather accept some wasted memory in scenarios that should be rare anyway (full
memory hot remove), as we do the same in other contexts already.
Vlastimil Babka (3):
mm, slub: stop freeing kmem_cache_node structures on node offline
On 1/12/21 8:24 AM, Michal Hocko wrote:
>> > >
>> > > If we're going to do a separate "patch: make process_sysctl_arg()
>> > > return an errno instead of 0" then fine, we can discuss that. But it's
>> > > conceptually a different work from fixing this situation.
>> > > .
>> > >
>> > However, are
tch when calling move_freelist_head in
> fast_isolate_freepages().
>
> Link:
> http://lkml.kernel.org/r/20190118175136.31341-12-mgor...@techsingularity.net
> Fixes: 5a811889de10f1eb ("mm, compaction: use free lists to quickly locate a
> migration target")
Sounds serious
On 1/6/21 8:09 PM, Christoph Lameter wrote:
> On Wed, 6 Jan 2021, Vlastimil Babka wrote:
>
>> rather accept some wasted memory in scenarios that should be rare anyway
>> (full
>> memory hot remove), as we do the same in other contexts already. It's all RFC
On 1/8/21 8:01 PM, Paul E. McKenney wrote:
>
> Andrew pushed this to an upstream maintainer, but I have not seen these
> patches appear anywhere. So if that upstream maintainer was Linus, I can
> send a follow-up patch once we converge. If the upstream maintainer was
> in fact me, I can of cours
On 1/8/21 1:26 AM, Paul E. McKenney wrote:
> On Wed, Jan 06, 2021 at 03:42:12PM -0800, Paul E. McKenney wrote:
>> On Wed, Jan 06, 2021 at 01:48:43PM -0800, Andrew Morton wrote:
>> > On Tue, 5 Jan 2021 17:16:03 -0800 "Paul E. McKenney"
>> > wrote:
>> >
>> > > This is v4 of the series the improves
> Reported-by: Andrii Nakryiko
> Suggested-by: Vlastimil Babka
> Signed-off-by: Paul E. McKenney
Acked-by: Vlastimil Babka
> ---
> mm/vmalloc.c | 3 ++-
> 1 file changed, 2 insertions(+), 1 deletion(-)
>
> diff --git a/mm/vmalloc.c b/mm/vmalloc.c
> index c274ea4..e3229ff
as
> vmalloc() storage from kernel_clone() or similar, depending on the degree
> of inlining that your compiler does. This is likely more helpful than
> the earlier "non-paged (local) memory".
>
> Cc: Andrew Morton
> Cc: Joonsoo Kim
> Cc:
> Reported-by: Andrii Nakryiko
> Signed-off-by: Paul E. McKenney
Acked-by: Vlastimil Babka
Rientjes
> Cc: Joonsoo Kim
> Cc: Andrew Morton
> Cc:
> Reported-by: Andrii Nakryiko
> [ paulmck: Convert to printing and change names per Joonsoo Kim. ]
> [ paulmck: Move slab definition per Stephen Rothwell and kbuild test robot. ]
> [ paulmck: Handle CONFIG_MMU=n case wher
including the kmemcg
accounting rewrite last year.
Signed-off-by: Vlastimil Babka
---
Hi,
this might perhaps look odd with 4 people (plus Andrew) already listed, but on
closer look we have 2 (or 3 if you count SLOB) allocators and the focus of each
maintainer varies. Maybe this would
vec() was once useful in detecting a KSM
> charge bug, so may be worth keeping: but skip if mem_cgroup_disabled().
>
> Fixes: 9a1ac2288cf1 ("mm/memcontrol:rewrite mem_cgroup_page_lruvec()")
> Signed-off-by: Hugh Dickins
Acked-by: Vlastimil Babka
> ---
>
> include/lin
On 1/7/21 6:36 PM, Andrea Arcangeli wrote:
> On Thu, Jan 07, 2021 at 06:28:29PM +0100, Vlastimil Babka wrote:
>> On 1/6/21 9:18 PM, Hugh Dickins wrote:
>> > On Wed, 6 Jan 2021, Andrea Arcangeli wrote:
>> >>
>> >> I'd be surprised if the kernel can bo
On 1/6/21 9:18 PM, Hugh Dickins wrote:
> On Wed, 6 Jan 2021, Andrea Arcangeli wrote:
>>
>> I'd be surprised if the kernel can boot with BUG_ON() defined as "do
>> {}while(0)" so I guess it doesn't make any difference.
>
> I had been afraid of that too, when CONFIG_BUG is not set:
> but I think it
n why it's not safe.
Vlastimil Babka (3):
mm, slab, slub: stop taking memory hotplug lock
mm, slab, slub: stop taking cpu hotplug lock
mm, slub: stop freeing kmem_cache_node structures on node offline
mm/slab_common.c | 20 --
mm/slub.c
generally such
nodes should be movable in order for hotremove to succeed in the first place, and
thus the GFP_KERNEL allocated kmem_cache_node will come from elsewhere.
[1] https://lore.kernel.org/linux-mm/20190924151147.gb23...@dhcp22.suse.cz/
Signed-off-by: Vlastimil Babka
---
mm/slub.c | 2
special care is needed
for SLAB.
As such, this patch removes all get/put_online_mems() usage by the slab
subsystem.
Signed-off-by: Vlastimil Babka
---
mm/slab_common.c | 10 --
mm/slub.c | 28 +---
2 files changed, 29 insertions(+), 9 deletions(-)
diff --git a/mm/sl
u_dead() hotplug callback takes the
slab_mutex.
To sum up, this patch removes get/put_online_cpus() calls from slab as it
should be safe without further adjustments.
Signed-off-by: Vlastimil Babka
---
mm/slab_common.c | 10 --
1 file changed, 10 deletions(-)
diff --git a/mm/slab_common.c b/
The subject should say BUILD_BUG()
On 12/30/20 4:40 PM, Arnd Bergmann wrote:
> From: Arnd Bergmann
>
> clang cannot evaluate this function argument at compile time
> when the function is not inlined, which leads to a link
> time failure:
>
> ld.lld: error: undefined symbol: __compiletime_assert_
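The failure follows from how the kernel's compile-time asserts work: they call an extern function that is never defined anywhere, relying on the compiler to evaluate the condition at compile time and delete the call. A standalone sketch of the pattern (not the exact kernel macro):

	/* never defined anywhere: if the compiler cannot constant-fold the
	 * condition and eliminate this call, linking fails with
	 * "undefined symbol: __compiletime_assert_demo" */
	extern void __compiletime_assert_demo(void);

	#define DEMO_BUILD_BUG_ON(cond)				\
		do {						\
			if (cond)				\
				__compiletime_assert_demo();	\
		} while (0)

	int main(void)
	{
		/* constant-folds to false, so the call is removed */
		DEMO_BUILD_BUG_ON(sizeof(int) > sizeof(long));
		return 0;
	}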
On 12/14/20 10:16 PM, Hugh Dickins wrote:
> On Tue, 24 Nov 2020, Rik van Riel wrote:
>
>> The allocation flags of anonymous transparent huge pages can be controlled
>> through the files in /sys/kernel/mm/transparent_hugepage/defrag, which can
>> keep the system from getting bogged down in the page
ep the code correct if the unions in struct page changed; such changes should
be done consciously and the needed changes evaluated - the comment should help with
that.
Signed-off-by: Vlastimil Babka
---
mm/slab.c | 3 ++-
mm/slub.c | 4 ++--
2 files changed, 4 insertions(+), 3 deletions(-)
diff --git
On 12/10/20 12:04 AM, Paul E. McKenney wrote:
>> > +/**
>> > + * kmem_valid_obj - does the pointer reference a valid slab object?
>> > + * @object: pointer to query.
>> > + *
>> > + * Return: %true if the pointer is to a not-yet-freed object from
>> > + * kmalloc() or kmem_cache_alloc(), either %tr
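A sketch of how a debugging helper might use this pair, per the kerneldoc above (assuming the companion kmem_dump_obj() from the same series):

	#include <linux/slab.h>
	#include <linux/printk.h>

	/* sketch: validate first, then report slab provenance of a pointer */
	static void debug_report_slab_ptr(void *p)
	{
		if (kmem_valid_obj(p))
			kmem_dump_obj(p);	/* cache name, offset, alloc stack if recorded */
		else
			pr_info("%px is not a slab object\n", p);
	}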
On 12/10/20 12:23 AM, Paul E. McKenney wrote:
> On Wed, Dec 09, 2020 at 06:51:20PM +0100, Vlastimil Babka wrote:
>> On 12/9/20 2:13 AM, paul...@kernel.org wrote:
>> > From: "Paul E. McKenney"
>> >
>> > This commit adds vmalloc() support to mem_du
On 12/9/20 2:13 AM, paul...@kernel.org wrote:
> From: "Paul E. McKenney"
>
> This commit adds vmalloc() support to mem_dump_obj(). Note that the
> vmalloc_dump_obj() function combines the checking and dumping, in
> contrast with the split between kmem_valid_obj() and kmem_dump_obj().
> The reaso
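So the overall dispatch ends up shaped roughly like this (a sketch of mem_dump_obj() as described in the commit message, not the verbatim code):

	/* sketch: slab keeps the check and the dump split,
	 * vmalloc combines them, with a generic fallback */
	void mem_dump_obj(void *object)
	{
		if (kmem_valid_obj(object)) {
			kmem_dump_obj(object);
			return;
		}
		if (vmalloc_dump_obj(object))
			return;
		pr_info("%px is neither slab nor vmalloc memory\n", object);
	}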
David Rientjes
> Cc: Joonsoo Kim
> Cc: Andrew Morton
> Cc:
> Reported-by: Andrii Nakryiko
> Signed-off-by: Paul E. McKenney
Acked-by: Vlastimil Babka
> ---
> mm/util.c | 7 ++-
> 1 file changed, 6 insertions(+), 1 deletion(-)
>
> diff --git a/mm/util
On 12/9/20 2:12 AM, paul...@kernel.org wrote:
> From: "Paul E. McKenney"
>
> There are kernel facilities such as per-CPU reference counts that give
> error messages in generic handlers or callbacks, whose messages are
> unenlightening. In the case of per-CPU reference-count underflow, this
> is
g into available memory, let's not complicate
things by making this optional.
> Signed-off-by: Liam Mark
> Signed-off-by: Georgi Djakov
Acked-by: Vlastimil Babka
> ---
>
> v2:
> - Improve the commit message (Andrew and Vlastimil)
> - Update page_owner.rst with more rec
meaning that at that time the page could not be migrated, but
> that has nothing to do with an EIO error.
>
> Let us return -EBUSY instead, as we do in case we failed to isolate
> the page.
>
> While at it, let us remove the "ret" print as its value does not change.
On 12/9/20 8:58 AM, Dan Carpenter wrote:
> On Tue, Dec 08, 2020 at 09:01:49PM -0800, Joe Perches wrote:
>> On Tue, 2020-12-08 at 16:34 -0800, Kees Cook wrote:
>>
>> > If not "Adjusted-by", what about "Tweaked-by", "Helped-by",
>> > "Corrected-by"?
>>
>> Improved-by: / Enhanced-by: / Revisions-by:
On 12/1/20 12:35 PM, Oscar Salvador wrote:
> On Wed, Nov 25, 2020 at 07:20:33PM +0100, Vlastimil Babka wrote:
>> On 11/19/20 11:57 AM, Oscar Salvador wrote:
>> > From: Naoya Horiguchi
>> >
>> > The call to get_user_pages_fast is only to get the pointer to a
On 12/2/20 2:11 AM, Shakeel Butt wrote:
> On Tue, Dec 1, 2020 at 5:07 PM Steven Rostedt wrote:
>>
>> On Tue, 1 Dec 2020 16:36:32 -0800
>> Shakeel Butt wrote:
>>
>> > SGTM but note that usually Andrew squash all the patches into one
>> > before sending to Linus. If you plan to replace the path buf
nr_swap_pages(0) -= ngoals
> nr_swap_pages = -1
>
> Signed-off-by: Zhaoyang Huang
Better now.
Acked-by: Vlastimil Babka
> ---
> change of v2: fix bug of unpaired of spin_lock
> ---
> ---
> mm/swapfile.c
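The race is of the kind sketched below (an assumed shape of the fix; swap_lock and nr_swap_pages are file-local to mm/swapfile.c, so this would live there): show_mem() samples nr_swap_pages while get_swap_pages() is between its decrement and its rollback, so the reader can transiently see a negative value.

	#include <linux/swap.h>
	#include <linux/spinlock.h>

	/* sketch: sample nr_swap_pages under swap_lock so the reader
	 * cannot observe the transient negative value left between a
	 * decrement and its rollback */
	static long free_swap_pages_sample(void)
	{
		long pages;

		spin_lock(&swap_lock);
		pages = atomic_long_read(&nr_swap_pages);
		spin_unlock(&swap_lock);
		return pages;
	}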
it.
Also adjust max_order initialization so that it's lower by one than previously,
which hopefully makes the code clearer.
> Signed-off-by: Muchun Song
Fixes: d9dddbf55667 ("mm/page_alloc: prevent merging between isolated and other
pageblocks")
Acked-by: Vlastimil Babka
Th
On 12/4/20 5:03 AM, Muchun Song wrote:
> On Fri, Dec 4, 2020 at 1:37 AM Vlastimil Babka wrote:
>>
>> On 12/2/20 1:18 PM, Muchun Song wrote:
>> > When we free a page whose order is very close to MAX_ORDER and greater
>> > than pageblock_order, it wastes some
On 12/3/20 12:36 PM, Zhaoyang Huang wrote:
> The scenario in which "Free swap -4kB" happens on my system is caused by
> get_swap_page_of_type or get_swap_pages racing with show_mem. Remove the race
> here.
>
> Signed-off-by: Zhaoyang Huang
> ---
> mm/swapfile.c | 7 +++
> 1 file
On 12/2/20 1:18 PM, Muchun Song wrote:
> When we free a page whose order is very close to MAX_ORDER and greater
> than pageblock_order, it wastes some CPU cycles to increase max_order
> to MAX_ORDER one by one and check the pageblock migratetype of that page
But we have to do that. It's not the sa
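For readers without the code at hand, the loop under discussion is shaped roughly like this (a simplified sketch of __free_one_page() with hypothetical helpers standing in for the mm/page_alloc.c internals): merging proceeds freely below pageblock_order, but each step above it must re-check pageblock migratetypes so an isolated pageblock is never merged with a normal one.

	/* hypothetical helpers: find_buddy, buddy_is_free, merge_buddy,
	 * pageblock_conflict stand in for the real internals */
	extern struct page *find_buddy(struct page *page, unsigned int order);
	extern bool buddy_is_free(struct page *buddy, unsigned int order);
	extern struct page *merge_buddy(struct page *page, struct page *buddy,
					unsigned int order);
	extern bool pageblock_conflict(struct page *page, unsigned int order);

	static unsigned int merge_free_page(struct page *page, unsigned int order,
					    unsigned int max_order)
	{
	continue_merging:
		while (order < max_order - 1) {
			struct page *buddy = find_buddy(page, order);

			if (!buddy_is_free(buddy, order))
				goto done_merging;
			page = merge_buddy(page, buddy, order);
			order++;
		}
		if (max_order < MAX_ORDER) {
			/* above pageblock_order, re-check migratetypes so an
			 * isolated pageblock is never merged with a normal one */
			if (pageblock_conflict(page, order))
				goto done_merging;
			max_order++;
			goto continue_merging;
		}
	done_merging:
		return order;
	}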
On 12/3/20 5:26 PM, David Hildenbrand wrote:
> On 03.12.20 01:03, Vlastimil Babka wrote:
>> On 12/2/20 1:21 PM, Muchun Song wrote:
>>> The max order page has no buddy page and never merges to another order.
>>> So isolating and then freeing it is pointless.
>>
On 12/3/20 3:43 AM, Muchun Song wrote:
> On Thu, Dec 3, 2020 at 8:03 AM Vlastimil Babka wrote:
>>
>> On 12/2/20 1:21 PM, Muchun Song wrote:
>> > The max order page has no buddy page and never merges to another order.
>> > So isolating and then freeing it is pointless
On 12/2/20 1:21 PM, Muchun Song wrote:
> The max order page has no buddy page and never merges to another order.
> So isolating and then freeing it is pointless.
>
> Signed-off-by: Muchun Song
Acked-by: Vlastimil Babka
> ---
> mm/page_isolation.c | 2 +-
> 1 file chang
Hi,
there was a bit of debate on Twitter about this, so I thought I would bring it
here. Imagine a scenario where a patch sits as a commit in -next and there's a bug
report or fix, possibly from a bot or from static analysis. The maintainer
decides to fold it into the original patch, which makes
On 11/30/20 2:45 PM, Michal Hocko wrote:
On Mon 30-11-20 21:36:49, Muchun Song wrote:
On Mon, Nov 30, 2020 at 9:23 PM Michal Hocko wrote:
>
> On Mon 30-11-20 21:15:12, Muchun Song wrote:
> > We found a case of kernel panic. The stack trace is as follows
> > (omit some irrelevant information):
>
On 11/27/20 8:23 PM, Souptick Joarder wrote:
On Sat, Nov 28, 2020 at 12:36 AM Vlastimil Babka wrote:
On 11/27/20 7:57 PM, Georgi Djakov wrote:
> Hi Vlastimil,
>
> Thanks for the comment!
>
> On 11/27/20 19:52, Vlastimil Babka wrote:
>> On 11/12/20 8:14 PM, Andrew Morton
On 11/27/20 7:57 PM, Georgi Djakov wrote:
Hi Vlastimil,
Thanks for the comment!
On 11/27/20 19:52, Vlastimil Babka wrote:
On 11/12/20 8:14 PM, Andrew Morton wrote:
On Thu, 12 Nov 2020 20:41:06 +0200 Georgi Djakov
wrote:
From: Liam Mark
Collect the time for each allocation recorded in
On 11/12/20 8:14 PM, Andrew Morton wrote:
On Thu, 12 Nov 2020 20:41:06 +0200 Georgi Djakov
wrote:
From: Liam Mark
Collect the time for each allocation recorded in page owner so that
allocation "surges" can be measured.
Record the pid for each allocation recorded in page owner so that
the s
v1->v2:
remove the inline from the function declaration in shmem_fs.h
v2->v3:
make shmem_aops global, and export it to modules.
Signed-off-by: Hui Su
Acked-by: Vlastimil Babka
---
include/linux/shmem_fs.h | 6 +-
mm/shmem.c | 16 ++--
2 files changed, 11 insert
On 11/15/20 6:40 PM, Hui Su wrote:
in shmem_get_inode():
new_inode();
  new_inode_pseudo();
    alloc_inode();
      ops->alloc_inode(); -> shmem_alloc_inode()
        kmem_cache_alloc();
memset(info, 0, (char *)inode - (char *)info);
So use kmem_cache_zalloc() in shmem_alloc_inode(),
and r
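For context, kmem_cache_zalloc() is kmem_cache_alloc() with __GFP_ZERO, so the change trades the explicit memset() for zeroing at allocation time; a minimal sketch (not the shmem code itself):

	#include <linux/slab.h>
	#include <linux/string.h>

	/* the two forms below are equivalent; zalloc drops the memset */
	static void *alloc_zeroed(struct kmem_cache *cachep, size_t size)
	{
		void *p = kmem_cache_alloc(cachep, GFP_KERNEL);

		if (p)
			memset(p, 0, size);
		return p;	/* same as kmem_cache_zalloc(cachep, GFP_KERNEL) */
	}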
t be static?
Signed-off-by: Zou Wei
Acked-by: Vlastimil Babka
---
mm/page_alloc.c | 4 ++--
1 file changed, 2 insertions(+), 2 deletions(-)
diff --git a/mm/page_alloc.c b/mm/page_alloc.c
index 63d8d8b..e7548344 100644
--- a/mm/page_alloc.c
+++ b/mm/page_alloc.c
@@ -3037,7 +3
e types
of allocation patterns because the count value is not printed in
cma_release().
We already print the count value in the trace logs; extend the same
to the pr_debug logs too.
Signed-off-by: Charan Teja Reddy
Acked-by: Vlastimil Babka
---
mm/cma.c | 2 +-
1 file changed, 1 insertion(+),
On 11/27/20 3:19 PM, Muchun Song wrote:
The current pageblock isolation logic can isolate each pageblock individually
since commit d9dddbf55667 ("mm/page_alloc: prevent merging between isolated
and other pageblocks"). So we need not be concerned about the page allocator
merging buddies from different page
On 11/26/20 7:14 PM, Rik van Riel wrote:
> On Thu, 2020-11-26 at 18:18 +0100, Vlastimil Babka wrote:
>> On 11/24/20 8:49 PM, Rik van Riel wrote:
>>> Currently if thp enabled=[madvise], mounting a tmpfs filesystem
>>> with huge=always and mmapping files from that t
On 11/24/20 8:49 PM, Rik van Riel wrote:
Currently if thp enabled=[madvise], mounting a tmpfs filesystem
with huge=always and mmapping files from that tmpfs does not
result in khugepaged collapsing those mappings, despite the
mount flag indicating that it should.
Fix that by breaking up the bloc
ill be a little
more aggressive than today for files mmapped with MADV_HUGEPAGE,
and a little less aggressive for files that are not mmapped or
mapped without that flag.
Signed-off-by: Rik van Riel
Acked-by: Vlastimil Babka
On 11/26/20 12:22 PM, Vlastimil Babka wrote:
On 11/26/20 8:24 AM, Yu Zhao wrote:
On Thu, Nov 26, 2020 at 02:39:03PM +0800, Alex Shi wrote:
On 11/26/20 12:52 PM, Yu Zhao wrote:
>> */
>> void __pagevec_lru_add(struct pagevec *pvec)
>> {
>> - int i;
>> -
On 11/26/20 3:25 AM, Alex Shi wrote:
On 11/26/20 7:43 AM, Andrew Morton wrote:
On Tue, 24 Nov 2020 12:21:28 +0100 Vlastimil Babka wrote:
On 11/22/20 3:00 PM, Alex Shi wrote:
Thanks a lot for all comments, I picked all up and here is the v3:
From 167131dd106a96fd08af725df850e0da6ec899af Mon
On 11/19/20 11:57 AM, Oscar Salvador wrote:
get_hwpoison_page already drains pcplists, previously disabling
them when trying to grab a refcount.
We do not need shake_page to take care of it anymore.
Signed-off-by: Oscar Salvador
---
mm/memory-failure.c | 7 ++-
1 file changed, 2 insertio
soft_offline and memory_failure paths that is guarded by
zone_pcplist_disable/zone_pcplist_enable.
[1]
https://patchwork.kernel.org/project/linux-mm/cover/2020092812.11329-1-vba...@suse.cz/
Signed-off-by: Oscar Salvador
Acked-by: Vlastimil Babka
Note that, as you say, the series should go after [1] above
On 11/26/20 8:24 AM, Yu Zhao wrote:
On Thu, Nov 26, 2020 at 02:39:03PM +0800, Alex Shi wrote:
On 11/26/20 12:52 PM, Yu Zhao wrote:
>> */
>> void __pagevec_lru_add(struct pagevec *pvec)
>> {
>> - int i;
>> - struct lruvec *lruvec = NULL;
>> + int i, nr_lruvec;
>>unsigned
On 11/26/20 4:12 AM, Alex Shi wrote:
On 11/25/20 11:38 PM, Vlastimil Babka wrote:
On 11/20/20 9:27 AM, Alex Shi wrote:
The current relock logic changes lru_lock when a new lruvec is found,
so if 2 memcgs are reading files or allocating pages at the same time,
they could hold the lru_lock alternately
On 11/19/20 11:57 AM, Oscar Salvador wrote:
From: Naoya Horiguchi
The call to get_user_pages_fast is only to get the pointer to a struct
page of a given address, pinning it is memory-poisoning handler's job,
so drop the refcount grabbed by get_user_pages_fast().
Note that the target page is st
get_any_page and __get_any_page, and let the message
be printed in soft_offline_page.
Signed-off-by: Oscar Salvador
Acked-by: Vlastimil Babka
On 11/19/20 11:57 AM, Oscar Salvador wrote:
pfn parameter is no longer needed, drop it.
Could also have been part of the previous patch.
Signed-off-by: Oscar Salvador
Acked-by: Vlastimil Babka
---
mm/memory-failure.c | 4 ++--
1 file changed, 2 insertions(+), 2 deletions(-)
diff --git
took the page off a buddy freelist 2) the page was
in-use and we migrated it 3) was a clean pagecache.
Because of that, a page can no longer be poisoned and be in a pcplist at the same time.
Signed-off-by: Oscar Salvador
Acked-by: Vlastimil Babka
---
mm/madvise.c | 5 -
1 file changed, 5 deletions
On 10/13/20 4:44 PM, Oscar Salvador wrote:
Currently, free hugetlb pages get dissolved, but we also need to make sure
to take the poisoned subpage off the buddy freelists, so no one stumbles
upon it (see previous patch for more information).
Signed-off-by: Oscar Salvador
Acked-by: Vlastimil Babka
we should be on the safe
side.
[1] https://lore.kernel.org/linux-mm/20190826104144.GA7849@linux/T/#u
[2] https://patchwork.kernel.org/cover/11792607/
Signed-off-by: Oscar Salvador
Acked-by: Naoya Horiguchi
Makes a lot of sense.
Acked-by: Vlastimil Babka
---
mm/memory-failure.c | 27
pcplists whenever we find this kind of page and retry
the check. It might be that pcplists have been spilled into the
buddy allocator and so we can handle it.
Signed-off-by: Oscar Salvador
Acked-by: Naoya Horiguchi
Acked-by: Vlastimil Babka
---
mm/memory-failure.c | 24
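The retry flow described above is sketched below (an assumed shape; try_get_page_state() is a hypothetical stand-in for the real logic in mm/memory-failure.c):

	#include <linux/mm.h>
	#include <linux/gfp.h>

	extern int try_get_page_state(struct page *p);	/* hypothetical */

	/* sketch: if the target page may be sitting on a pcplist, spill
	 * the pcplists into the buddy allocator and check once more */
	static int get_page_state_retrying(struct page *p)
	{
		int ret = try_get_page_state(p);

		if (ret == -EBUSY) {
			drain_all_pages(page_zone(p));	/* spill pcplists */
			ret = try_get_page_state(p);
		}
		return ret;
	}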
ges isolated for compaction will be cleared
Cc: Andrew Morton
Cc: Alexander Potapenko
Cc: Michal Hocko
Cc: Mike Kravetz
Cc: Vlastimil Babka
Cc: Mike Rapoport
Cc: Oscar Salvador
Cc: Kees Cook
Cc: Michael Ellerman
Signed-off-by: David Hildenbrand
Acked-by: Vlastimil Babka
---
This is
On 11/20/20 9:27 AM, Alex Shi wrote:
The current relock logic changes lru_lock when a new lruvec is found,
so if 2 memcgs are reading files or allocating pages at the same time,
they could hold the lru_lock alternately and wait for each other due to
the fairness attribute of the ticket spinlock.
This patch will
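The relock pattern under discussion looks roughly like this (a sketch using the per-memcg lru_lock this series introduces and the v5.10 mem_cgroup_page_lruvec() signature; the point is that the lock is dropped and retaken only when the lruvec actually changes):

	#include <linux/pagevec.h>
	#include <linux/memcontrol.h>
	#include <linux/mm.h>

	/* sketch: batch pages under one lru_lock until the lruvec changes */
	static void pagevec_lru_add_sketch(struct pagevec *pvec)
	{
		struct lruvec *lruvec = NULL;
		int i;

		for (i = 0; i < pagevec_count(pvec); i++) {
			struct page *page = pvec->pages[i];
			struct lruvec *new = mem_cgroup_page_lruvec(page,
							page_pgdat(page));

			if (new != lruvec) {	/* relock only on change */
				if (lruvec)
					spin_unlock_irq(&lruvec->lru_lock);
				lruvec = new;
				spin_lock_irq(&lruvec->lru_lock);
			}
			/* ... add page to lruvec's LRU list ... */
		}
		if (lruvec)
			spin_unlock_irq(&lruvec->lru_lock);
	}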
should we use a smaller factor (<10%) in the
previous formula.
Signed-off-by: Lin Feng
Acked-by: Vlastimil Babka
---
init/main.c | 2 --
mm/page_alloc.c | 3 +++
2 files changed, 3 insertions(+), 2 deletions(-)
diff --git a/init/main.c b/init/main.c
index 20baced721ad..a3f7c341628
On 11/25/20 4:46 AM, Matthew Wilcox (Oracle) wrote:
Code outside mm/ should not be calling free_unref_page(). Also
move free_unref_page_list().
Good idea.
Signed-off-by: Matthew Wilcox (Oracle)
Acked-by: Vlastimil Babka
There seems to be some effort to remove "extern" fro
andle pgtable_page_ctor() fail")
Signed-off-by: Matthew Wilcox (Oracle)
Acked-by: Vlastimil Babka
---
arch/sparc/mm/init_64.c | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/arch/sparc/mm/init_64.c b/arch/sparc/mm/init_64.c
index 96edf64d4fb3..182bb7bdaa0a 100644
--- a/arch/sparc/
On 11/25/20 6:34 AM, Andrea Arcangeli wrote:
Hello,
On Mon, Nov 23, 2020 at 02:01:16PM +0100, Vlastimil Babka wrote:
On 11/21/20 8:45 PM, Andrea Arcangeli wrote:
> A corollary issue was fixed in
> 39639000-39814fff : Unknown E820 type
>
> pfn 0x7a200 -> 0x7a20 min_
Please CC linux-api on future versions.
On 10/26/20 5:05 PM, Topi Miettinen wrote:
Writing a new value of 3 to /proc/sys/kernel/randomize_va_space
enables full randomization of memory mappings created with mmap(NULL,
...). With 2, the base of the VMA used for such mappings is random,
but the map
On 11/23/20 4:10 PM, Charan Teja Kalla wrote:
Thanks Michal!
On 11/23/2020 7:43 PM, Michal Hocko wrote:
On Mon 23-11-20 19:33:16, Charan Teja Reddy wrote:
When pages fail to be isolated or migrated, the page owner
information along with the page info is dumped. If there are continuous
fai
ed-by: Vlastimil Babka
---
include/linux/compaction.h | 12
mm/compaction.c | 8
2 files changed, 4 insertions(+), 16 deletions(-)
diff --git a/include/linux/compaction.h b/include/linux/compaction.h
index 1de5a1151ee7..ed4070ed41ef 100644
--- a/include/
wouldn't mind if the goto stayed, but it's not repeating that much
without it (list_move() + continue, 3 times) so...
Acked-by: Vlastimil Babka
Signed-off-by: Alex Shi
Cc: Andrew Morton
Cc: Matthew Wilcox
Cc: Hugh Dickins
Cc: Yu Zhao
Cc: Vlastimil Babka
Cc: Michal Hocko
Cc:
+CC John Hubbard
On 11/20/20 9:27 PM, Pavel Tatashin wrote:
Recently, I encountered a hang that is happening during memory hot
remove operation. It turns out that the hang is caused by pinned user
pages in ZONE_MOVABLE.
Kernel expects that all pages in ZONE_MOVABLE can be migrated, but
this is
On 11/21/20 8:45 PM, Andrea Arcangeli wrote:
A corollary issue was fixed in
e577c8b64d58fe307ea4d5149d31615df2d90861. A second issue remained in
v5.7:
https://lkml.kernel.org/r/8c537eb7-85ee-4dcf-943e-3cc0ed0df...@lca.pw
==
page:eaaa refcount:1 mapcount:0 mapping:224
when determining the
minimum objects, thereby increasing the chances of choosing
a lower conservative page order for the slab.
Signed-off-by: Bharata B Rao
Acked-by: Vlastimil Babka
Ideally, we would react to hotplug events and update existing caches
accordingly. But for that, recalculation of
On 11/13/20 1:10 PM, David Hildenbrand wrote:
@@ -1186,12 +1194,12 @@ void clear_free_pages(void)
if (WARN_ON(!(free_pages_map)))
return;
- if (IS_ENABLED(CONFIG_PAGE_POISONING_ZERO) || want_init_on_free()) {
+ if (page_poisoning_enabled() || want_init_on_free())
We can use the same mechanism to instead poison free pages with PAGE_POISON
after resume. This covers both zero and 0xAA patterns. Thus we can remove the
Kconfig restriction that disables page poison sanity checking when hibernation
is enabled.
Signed-off-by: Vlastimil Babka
Acked-by: Rafael J. Wysocki
hecking it back on alloc. Thus, remove this option and suggest
init_on_free instead in the main config's help.
Signed-off-by: Vlastimil Babka
Acked-by: David Hildenbrand
---
drivers/virtio/virtio_balloon.c | 4 +---
mm/Kconfig.debug | 15 ---
mm/page_poison.c
This results in simpler and more
effective code.
Signed-off-by: Vlastimil Babka
Reviewed-by: David Hildenbrand
Reviewed-by: Mike Rapoport
---
include/linux/mm.h | 20 ++-
init/main.c | 2 +-
mm/page_alloc.c | 88 ++
3 files ch
us, remove the CONFIG_PAGE_POISONING_ZERO option for
being redundant.
Signed-off-by: Vlastimil Babka
Acked-by: David Hildenbrand
---
include/linux/poison.h | 4
mm/Kconfig.debug | 12
mm/page_alloc.c | 8 +---
tools/include/linux/poi
oc support. Move the check to
init_mem_debugging_and_hardening() to enable a single static key instead of
having two static branches in page_poisoning_enabled_static().
Signed-off-by: Vlastimil Babka
---
drivers/virtio/virtio_balloon.c | 2 +-
include/linux/mm.h | 33 ++
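The resulting single-static-key arrangement looks roughly like this (a sketch based on the series description, not the verbatim header):

	#include <linux/jump_label.h>

	DEFINE_STATIC_KEY_FALSE(_page_poisoning_enabled);

	/* one static branch instead of two: boot-time setup in
	 * init_mem_debugging_and_hardening() enables the key only when
	 * poisoning is requested and not redundant with init_on_free */
	static __always_inline bool page_poisoning_enabled_static(void)
	{
		return static_branch_unlikely(&_page_poisoning_enabled);
	}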
rnel.org/r/20201026173358.14704-1-vba...@suse.cz
[2] https://lore.kernel.org/linux-mm/20201103152237.9853-1-vba...@suse.cz/
Vlastimil Babka (5):
mm, page_alloc: do not rely on the order of page_poison and
init_on_alloc/free parameters
mm, page_poison: use static key more efficiently
kernel/po
On 11/11/20 6:58 PM, David Hildenbrand wrote:
On 11.11.20 10:28, Vlastimil Babka wrote:
- /*
-* per-cpu pages are drained after start_isolate_page_range, but
-* if there are still pages that are not free, make sure that we
-* drain
below.
8<
From cae1e8ccfa57c28ed1b2f5f8a47319b86cbdcfbf Mon Sep 17 00:00:00 2001
From: Vlastimil Babka
Date: Thu, 12 Nov 2020 15:33:07 +0100
Subject: [PATCH] kernel/power: allow hibernation with page_poison sanity
checking-fix
Adapt to __kernel_unpoison_pages fixup. S
On 11/11/20 4:38 PM, David Hildenbrand wrote:
On 03.11.20 16:22, Vlastimil Babka wrote:
Commit 11c9c7edae06 ("mm/page_poison.c: replace bool variable with static key")
changed page_poisoning_enabled() to a static key check. However, the function
is not inlined, so each check still
inin
Cc: Jann Horn
Cc: Mel Gorman
Cc: Johannes Weiner
Cc: Matthew Wilcox
Cc: Hugh Dickins
Cc: cgro...@vger.kernel.org
Cc: linux-kernel@vger.kernel.org
Cc: linux...@kvack.org
Acked-by: Vlastimil Babka
Duyck
Signed-off-by: Alex Shi
Acked-by: Hugh Dickins
Acked-by: Johannes Weiner
Acked-by: Vlastimil Babka
Cc: Johannes Weiner
Cc: Andrew Morton
Cc: Thomas Gleixner
Cc: Andrey Ryabinin
Cc: Matthew Wilcox
Cc: Mel Gorman
Cc: Konstantin Khlebnikov
Cc: Hugh Dickins
Cc: Tejun Heo
Cc: linux
On 11/5/20 9:55 AM, Alex Shi wrote:
This patch moves the per-node lru_lock into lruvec, thus bringing a lru_lock for
each memcg per node. So on a large machine, each memcg doesn't
have to suffer from per-node pgdat->lru_lock contention. They can go
fast with their own lru_lock.
After move memcg