On 9/23/20 5:26 PM, David Hildenbrand wrote:
> On 23.09.20 16:31, Vlastimil Babka wrote:
>> On 9/16/20 9:31 PM, David Hildenbrand wrote:
>>
>
> Hi Vlastimil,
>
>> I see the point, but I don't think the head/tail mechanism is great for
>> this.
> detect that the new behavior is undesirable for
> __free_pages_core() during boot, we can let the caller specify the
> behavior.
>
> Cc: Andrew Morton
> Cc: Alexander Duyck
> Cc: Mel Gorman
> Cc: Michal Hocko
> Cc: Dave Hansen
> Cc: Vlastimil Babka
> Cc: Wei Yang
> Cc: Oscar Salvador
> Cc: Mike Rapoport
> Cc: Scott Cheloha
> Cc: Michael Ellerman
> Signed-off-by: David Hildenbrand
> ---
., alloc_contig_range(),
> memory onlining, memory offlining).
>
> Cc: Andrew Morton
> Cc: Alexander Duyck
> Cc: Mel Gorman
> Cc: Michal Hocko
> Cc: Dave Hansen
> Cc: Vlastimil Babka
> Cc: Wei Yang
> Cc: Oscar Salvador
> Cc: Mike Rapoport
> Cc: Scott C
> be good enough for internal purposes.
>
> Cc: Andrew Morton
> Cc: Alexander Duyck
> Cc: Mel Gorman
> Cc: Michal Hocko
> Cc: Dave Hansen
> Cc: Vlastimil Babka
> Cc: Wei Yang
> Cc: Oscar Salvador
> Cc: Mike Rapoport
> Signed-off-by: David Hildenbrand
Revie
On 9/16/20 9:31 PM, David Hildenbrand wrote:
>
>
>> Am 16.09.2020 um 20:50 schrieb osalva...@suse.de:
>>
>> On 2020-09-16 20:34, David Hildenbrand wrote:
>>> When adding separate memory blocks via add_memory*() and onlining them
>>> immediately, the metadata (especially the memmap) of the next
-by: Vlastimil Babka
---
include/linux/mmzone.h | 6 ++
mm/page_alloc.c | 16 ++--
2 files changed, 20 insertions(+), 2 deletions(-)
diff --git a/include/linux/mmzone.h b/include/linux/mmzone.h
index 90721f3156bc..7ad3f14dbe88 100644
--- a/include/linux/mmzone.h
+++ b
-aeaa-ff24-260b-36427fac9...@suse.cz/
Vlastimil Babka (9):
mm, page_alloc: clean up pageset high and batch update
mm, page_alloc: calculate pageset high and batch once per zone
mm, page_alloc: remove setup_pageset()
mm, page_alloc: simplify pageset_update()
mm, page_alloc: make per_cpu_pageset a
moved wrappers was:
build_all_zonelists_init()
setup_pageset()
pageset_set_batch()
which was hardcoding batch as 0, so we can just open-code a call to
pageset_update() with constant parameters instead.
No functional change.
Signed-off-by: Vlastimil Babka
Reviewed-by: Oscar Salvador
---
mm/page_al
zone_pageset_init() and __zone_pcp_update()
wrappers.
No functional change.
Signed-off-by: Vlastimil Babka
Reviewed-by: Oscar Salvador
Reviewed-by: David Hildenbrand
---
mm/page_alloc.c | 42 ++
1 file changed, 18 insertions(+), 24 deletions(-)
diff --git a/mm
ext, where we
want to make sure no isolated pages are left behind on pcplists.
Signed-off-by: Vlastimil Babka
---
include/linux/gfp.h | 1 +
mm/memory_hotplug.c | 4 ++--
mm/page_alloc.c | 29 -
3 files changed, 23 insertions(+), 11 deletions(-)
diff --git a/incl
t unnecessary read tearing, but mainly to alert anybody
making future changes to the code that special care is needed.
Signed-off-by: Vlastimil Babka
---
mm/page_alloc.c | 40 ++--
1 file changed, 18 insertions(+), 22 deletions(-)
diff --git a/mm/page_
ilently rely on operations that can be changed
in the future. Make sure only properly initialized pcplists are visible, using
smp_store_release(). The read side has a data dependency via the zone->pageset
pointer instead of an explicit read barrier.
Signed-off-by: Vlastimil Babka
---
mm/pa
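The publish/consume ordering described above is the classic release/data-dependency idiom. A minimal sketch, assuming a hypothetical init helper pageset_init_all() (illustrative only, not the patch's actual code):

    /* Writer: fully initialize the pcplists, then publish the pointer.
     * smp_store_release() orders all the init stores before the pointer
     * store, so a reader can never observe a half-initialized pageset.
     */
    static void publish_pageset(struct zone *zone)
    {
        struct per_cpu_pageset __percpu *p;

        p = alloc_percpu(struct per_cpu_pageset);
        pageset_init_all(p);            /* hypothetical init helper */
        smp_store_release(&zone->pageset, p);
    }

    /* Reader: no explicit barrier; the loads through the pointer carry
     * a data dependency on the pointer load itself.
     */
    static int read_pcp_count(struct zone *zone, int cpu)
    {
        struct per_cpu_pageset __percpu *p = READ_ONCE(zone->pageset);

        return per_cpu_ptr(p, cpu)->pcp.count;
    }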
: Pavel Tatashin
Signed-off-by: Vlastimil Babka
---
mm/memory_hotplug.c | 11 ++-
mm/page_alloc.c | 2 ++
mm/page_isolation.c | 10 +-
3 files changed, 13 insertions(+), 10 deletions(-)
diff --git a/mm/memory_hotplug.c b/mm/memory_hotplug.c
index 9db80ee29caa..08f729922e18
functional change.
Signed-off-by: Vlastimil Babka
---
mm/page_alloc.c | 17 ++---
1 file changed, 10 insertions(+), 7 deletions(-)
diff --git a/mm/page_alloc.c b/mm/page_alloc.c
index 26069c8d1b19..76c2b4578723 100644
--- a/mm/page_alloc.c
+++ b/mm/page_alloc.c
@@ -5823,7 +5823,7 @@ static
m.
Currently the only user of this functionality is offline_pages().
[1]
https://lore.kernel.org/linux-mm/20200903140032.380431-1-pasha.tatas...@soleen.com/
Suggested-by: David Hildenbrand
Suggested-by: Michal Hocko
Signed-off-by: Vlastimil Babka
---
include/linux/mmzone.h | 2 ++
include
On 9/10/20 1:30 PM, Oscar Salvador wrote:
> On Mon, Sep 07, 2020 at 06:36:27PM +0200, Vlastimil Babka wrote:
> */
>> -static void setup_pageset(struct per_cpu_pageset *p);
>> +static void pageset_init(struct per_cpu_pageset *p);
>
> this belongs to the respectiv
On 9/10/20 11:23 AM, Oscar Salvador wrote:
> On Mon, Sep 07, 2020 at 06:36:26PM +0200, Vlastimil Babka wrote:
>> We initialize boot-time pagesets with setup_pageset(), which sets high and
>> batch values that effectively disable pcplists.
>>
>> We can remove this w
On 9/10/20 10:31 AM, Oscar Salvador wrote:
> On Mon, Sep 07, 2020 at 06:36:24PM +0200, Vlastimil Babka wrote:
>
>> -/*
>> - * pageset_set_high() sets the high water mark for hot per_cpu_pagelist
>> - * to the value high for the pageset p.
>> - */
>>
On 9/10/20 12:29 PM, David Hildenbrand wrote:
> On 09.09.20 13:55, Vlastimil Babka wrote:
>> On 9/9/20 1:36 PM, Michal Hocko wrote:
>>> On Wed 09-09-20 12:48:54, Vlastimil Babka wrote:
>>>> Here's a version that will apply on top of next-20200908. The first 4
On 9/9/20 11:52 PM, Matthew Wilcox wrote:
> On Wed, Sep 09, 2020 at 10:47:24PM +0100, Chris Down wrote:
>> Vlastimil Babka writes:
>> > - Exit also on other signals such as SIGABRT, SIGTERM? If I write to
>> > drop_caches
>> > and think it's too long, I
On 9/9/20 5:20 PM, zangchun...@bytedance.com wrote:
> From: Chunxin Zang
>
> On our server, there are about 10k memcg in one machine. They use memory
> very frequently. When I trigger drop caches, the process will loop infinitely
> in drop_slab_node.
>
> There are two reasons:
> 1. We have too many m
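For context, drop_slab_node() at that time was essentially the following retry loop (simplified sketch of mm/vmscan.c of that era): every pass walks all memcgs, and the loop exits only once a whole pass frees 10 or fewer objects, so with ~10k busy memcgs it may never terminate.

    void drop_slab_node(int nid)
    {
        unsigned long freed;

        do {
            struct mem_cgroup *memcg = NULL;

            freed = 0;
            memcg = mem_cgroup_iter(NULL, NULL, NULL);
            do {
                /* one shrink pass per memcg on this node */
                freed += shrink_slab(GFP_KERNEL, nid, memcg, 0);
            } while ((memcg = mem_cgroup_iter(NULL, memcg, NULL)) != NULL);
        } while (freed > 10);   /* exits only when a pass frees almost nothing */
    }

The remedies discussed in the thread amount to bounding this loop: bailing out on a fatal pending signal, capping the number of passes, or both.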
On 9/9/20 1:36 PM, Michal Hocko wrote:
> On Wed 09-09-20 12:48:54, Vlastimil Babka wrote:
>> Here's a version that will apply on top of next-20200908. The first 4
>> patches need no change.
>>
>> 8<
From 8febc17272b8e8b378e2e5ea5e76b
On 9/8/20 8:29 PM, David Hildenbrand wrote:
> On 07.09.20 18:36, Vlastimil Babka wrote:
>> As per the discussions [1] [2] this is an attempt to implement David's
>> suggestion that page isolation should disable pcplists to avoid races. This
>> is
>> done without
Here's a version that will apply on top of next-20200908. The first 4 patches
need no change.
8<
From 8febc17272b8e8b378e2e5ea5e76b2616f029c5b Mon Sep 17 00:00:00 2001
From: Vlastimil Babka
Date: Mon, 7 Sep 2020 17:20:39 +0200
Subject: [PATCH] mm, page_alloc: disable pcpl
On 9/8/20 2:16 PM, Alexander Potapenko wrote:
>> Toggling a static branch is AFAIK quite disruptive (PeterZ will probably tell
>> you better), and with the default 100ms sample interval, I'd think it's not
>> good
>> to toggle it so often? Did you measure what performance would you get, if the
>>
On 9/8/20 5:31 PM, Marco Elver wrote:
>>
>> How much memory overhead does this end up having? I know it depends on
>> the object size and so forth. But, could you give some real-world
>> examples of memory consumption? Also, what's the worst case? Say I
>> have a ton of worst-case-sized (32b)
On 9/8/20 5:09 PM, Chris Down wrote:
> drop_caches by its very nature can be extremely performance intensive -- if
> someone wants to abort after trying too long, they can just send a
> TASK_KILLABLE signal, no? If exiting the loop and returning to usermode
> doesn't
> reliably work when doing
t_trans_huge().
>
> Signed-off-by: Wei Yang
Other than that, seems like it leads to less shifting, so
Acked-by: Vlastimil Babka
> ---
> mm/huge_memory.c | 4 ++--
> mm/mmap.c | 8
> 2 files changed, 6 insertions(+), 6 deletions(-)
>
> diff --git a/mm
On 8/27/20 2:06 PM, Jim Baxter wrote:
> Has anyone any ideas of how to investigate this delay further?
>
> Comparing the perf output for unplugging the USB stick and using umount
> which does not cause these delays in other workqueues the main difference
I don't have that much insight into this, bu
On 9/7/20 3:40 PM, Marco Elver wrote:
> This adds the Kernel Electric-Fence (KFENCE) infrastructure. KFENCE is a
> low-overhead sampling-based memory safety error detector of heap
> use-after-free, invalid-free, and out-of-bounds access errors. This
> series enables KFENCE for the x86 and arm64 ar
__zone_pcp_update() wrappers.
No functional change.
Signed-off-by: Vlastimil Babka
---
mm/page_alloc.c | 40 +---
1 file changed, 17 insertions(+), 23 deletions(-)
diff --git a/mm/page_alloc.c b/mm/page_alloc.c
index 0b516208afda..f669a251f654 100644
--- a/mm
: Vlastimil Babka
---
mm/page_alloc.c | 13 +++--
1 file changed, 7 insertions(+), 6 deletions(-)
diff --git a/mm/page_alloc.c b/mm/page_alloc.c
index f669a251f654..a0cab2c6055e 100644
--- a/mm/page_alloc.c
+++ b/mm/page_alloc.c
@@ -5902,7 +5902,7 @@ build_all_zonelists_init(void
moved wrappers was:
build_all_zonelists_init()
setup_pageset()
pageset_set_batch()
which was hardcoding batch as 0, so we can just open-code a call to
pageset_update() with constant parameters instead.
No functional change.
Signed-off-by: Vlastimil Babka
---
mm/page_alloc.c
asha.tatas...@soleen.com/
Vlastimil Babka (5):
mm, page_alloc: clean up pageset high and batch update
mm, page_alloc: calculate pageset high and batch once per zone
mm, page_alloc: remove setup_pageset()
mm, page_alloc: cache pageset high and batch in struct zone
mm, page_alloc: disabl
Signed-off-by: Vlastimil Babka
---
include/linux/mmzone.h | 2 ++
mm/page_alloc.c | 18 +-
2 files changed, 15 insertions(+), 5 deletions(-)
diff --git a/include/linux/mmzone.h b/include/linux/mmzone.h
index 8379432f4f2f..15582ca368b9 100644
--- a/include/linux/mmzo
be racy and lead to missing some
CPUs to drain. If others agree, this can be separated and potentially
backported.
[1]
https://lore.kernel.org/linux-mm/20200903140032.380431-1-pasha.tatas...@soleen.com/
Suggested-by: David Hildenbrand
Suggested-by: Michal Hocko
Signed-off-by: Vlas
On 9/3/20 8:23 PM, Pavel Tatashin wrote:
>>
>> As expressed in reply to v2, I dislike this hack. There is strong
>> synchronization, just PCP is special. Allocating from MIGRATE_ISOLATE is
>> just plain ugly.
>>
>> Can't we temporarily disable PCP (while some pageblock in the zone is
>> isolated, w
> list_add(&page->lru, &pcp->lists[migratetype]);
> // add new page to already drained pcp list
>
> Thread#2
> Never drains pcp again, and therefore gets stuck in the loop.
>
> The fix is to try to drain per-cpu lists again after
> check_pages_isolated_cb() fails.
>
> Signed-off-by: Pavel Tatashin
> Cc: sta...@vger.kernel.org
Fixes: ?
Acked-by: Vlastimil Babka
Thanks.
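A sketch of the shape of the acked fix, assuming it sits in the offlining retry path (illustrative, not the exact hunk):

    int ret;

    /* If the isolation check fails, a racing free may have put a page
     * back on a pcplist after the earlier drain (its cached migratetype
     * predates the isolation), so drain again before retrying instead
     * of looping forever on the same stranded page.
     */
    ret = test_pages_isolated(start_pfn, end_pfn, MEMORY_OFFLINE);
    if (ret)
        drain_all_pages(zone);  /* then the outer loop retries */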
On 9/3/20 10:40 AM, Alex Shi wrote:
>
>
> On 9/3/20 4:32 PM, Alex Shi wrote:
>>>
>> I have run thpscale with the 'always' defrag setting of THP. The Amean stddev is
>> much larger than the very small reduction in average run time.
>>
>> But the remaining patch 4 could show the cmpxchg retries reduced from thous
On 9/2/20 7:25 PM, Mike Kravetz wrote:
> On 9/2/20 3:49 AM, Vlastimil Babka wrote:
>> On 9/1/20 3:46 AM, Wei Yang wrote:
>>> The page allocated from buddy is not on any list, so just use list_add()
>>> is enough.
>>>
>>> Signed-off-by: Wei Yang
On 9/2/20 5:13 PM, Michal Hocko wrote:
> On Wed 02-09-20 16:55:05, Vlastimil Babka wrote:
>> On 9/2/20 4:26 PM, Pavel Tatashin wrote:
>> > On Wed, Sep 2, 2020 at 10:08 AM Michal Hocko wrote:
>> >>
>> >> >
>> >> >
On 9/2/20 4:26 PM, Pavel Tatashin wrote:
> On Wed, Sep 2, 2020 at 10:08 AM Michal Hocko wrote:
>>
>> >
>> > Thread#1 - continue
>> > free_unref_page_commit
>> >migratetype = get_pcppage_migratetype(page);
>> > // get old migration type
>> >list_add(&p
On 9/2/20 4:31 PM, Pavel Tatashin wrote:
>> > > The fix is to try to drain per-cpu lists again after
>> > > check_pages_isolated_cb() fails.
>>
>> Still trying to wrap my head around this but I think this is not a
>> proper fix. It should be the page isolation to make sure no races are
>> possible
On 9/1/20 3:46 AM, Wei Yang wrote:
> The page allocated from buddy is not on any list, so just use list_add()
> is enough.
>
> Signed-off-by: Wei Yang
> Reviewed-by: Baoquan He
> Reviewed-by: Mike Kravetz
> ---
> mm/hugetlb.c | 2 +-
> 1 file changed, 1 insertion(+), 1 deletion(-)
>
> diff --
On 8/28/20 6:47 PM, Pavel Tatashin wrote:
> There appears to be another problem that is related to the
> cgroup_mutex -> mem_hotplug_lock deadlock described above.
>
> In the original deadlock that I described, the workaround is to
> replace crash dump from piping to Linux traditional save to file
On 9/1/20 4:50 AM, Alex Shi wrote:
> pageblock_flags is used as long; since every pageblock's flags are just 4
> bits, a 'long' will hold 8 (32-bit machine) or 16 pageblocks' flags,
> so setting one pageblock's flags has to sync in cmpxchg with 7 or 15 other
> pageblocks' flags. That can cause long waiting for syn
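To make the sharing concrete, here is the word/bit arithmetic in simplified form (not the kernel's exact interleaved layout): with 4 bits per pageblock and a 64-bit long, one word carries 16 pageblocks' flags, so a cmpxchg on one pageblock's bits retries whenever any of the other 15 change.

    #define PB_BITS 4       /* flag bits per pageblock */

    static void set_pb_flags(unsigned long *bitmap, unsigned long pb_idx,
                             unsigned long flags)
    {
        unsigned long bit = pb_idx * PB_BITS;
        unsigned long *word = bitmap + bit / BITS_PER_LONG;
        unsigned long shift = bit % BITS_PER_LONG;
        unsigned long mask = ((1UL << PB_BITS) - 1) << shift;
        unsigned long old, new;

        do {
            old = READ_ONCE(*word);
            new = (old & ~mask) | (flags << shift);
        } while (cmpxchg(word, old, new) != old);   /* retried on any
                                                     * neighbour update */
    }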
On 8/19/20 10:09 AM, Alex Shi wrote:
>
>
> On 8/19/20 3:57 PM, Anshuman Khandual wrote:
>>
>>
>> On 08/19/2020 11:17 AM, Alex Shi wrote:
>>> The current pageblock_flags is only 4 bits, so it has to share a char size
>>> in cmpxchg when it gets set; the false sharing causes a perf drop.
>>>
>>> If we increase
On 8/26/20 7:12 AM, Joonsoo Kim wrote:
> On Tue, Aug 25, 2020 at 6:43 PM, Vlastimil Babka wrote:
>>
>>
>> On 8/25/20 6:59 AM, js1...@gmail.com wrote:
>> > From: Joonsoo Kim
>> >
>> > memalloc_nocma_{save/restore} APIs can be used to skip page allocation
On 8/25/20 6:59 AM, js1...@gmail.com wrote:
> From: Joonsoo Kim
>
> memalloc_nocma_{save/restore} APIs can be used to skip page allocation
> on CMA area, but, there is a missing case and the page on CMA area could
> be allocated even if APIs are used. This patch handles this case to fix
> the p
On 7/30/20 11:34 AM, David Hildenbrand wrote:
> Let's clean it up a bit, simplifying error handling and getting rid of
> the label.
Nit: the label was already removed by patch 1/6?
> Reviewed-by: Baoquan He
> Reviewed-by: Pankaj Gupta
> Cc: Andrew Morton
> Cc: Michal Hocko
> Cc: Michael S. Ts
ash has already been seen, so it's
> a good trade-off.
>
> Reported-by: Qian Cai
> Suggested-by: Matthew Wilcox
> Cc: Vlastimil Babka
> Cc: Kirill A. Shutemov
> Signed-off-by: John Hubbard
Acked-by: Vlastimil Babka
> ---
> Hi,
>
> I'm assuming that a
On 8/6/20 3:48 PM, Matthew Wilcox wrote:
> On Thu, Aug 06, 2020 at 01:45:11PM +0200, Vlastimil Babka wrote:
>> How about this additional patch now that we have head_mapcount()? (I wouldn't
>> go for squashing as the goal and scope are too different).
>
> I like it. It bo
On 8/6/20 5:39 PM, Matthew Wilcox wrote:
>> >> +++ b/mm/huge_memory.c
>> >> @@ -2125,7 +2125,7 @@ static void __split_huge_pmd_locked(struct
>> >> vm_area_struct *vma, pmd_t *pmd,
>> >>* Set PG_double_map before dropping compound_mapcount to avoid
>> >>* false-negative page_mapped().
>> >>
On 7/2/20 10:32 AM, Xunlei Pang wrote:
> The node list_lock in count_partial() is held a long time while iterating
> over large partial page lists, which can cause a thundering-herd effect
> on the list_lock contention, e.g. it causes business response-time
> jitters when accessing "/proc/slabinfo"
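For reference, count_partial() is essentially a full list walk under the node's list_lock (shape of mm/slub.c at the time, simplified), which is why long partial lists serialize every /proc/slabinfo reader:

    static unsigned long count_partial(struct kmem_cache_node *n,
                                       int (*get_count)(struct page *))
    {
        unsigned long flags;
        unsigned long x = 0;
        struct page *page;

        /* list_lock is held for the entire walk */
        spin_lock_irqsave(&n->list_lock, flags);
        list_for_each_entry(page, &n->partial, slab_list)
            x += get_count(page);
        spin_unlock_irqrestore(&n->list_lock, flags);
        return x;
    }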
On 8/4/20 7:12 PM, Matthew Wilcox wrote:
> On Tue, Aug 04, 2020 at 07:02:14PM +0200, Vlastimil Babka wrote:
>> > 2) There was a proposal from Matthew Wilcox:
>> > https://lkml.org/lkml/2020/7/31/1015
>> >
>> >
>> > On non-RT, we could make that lo
idea how much it helps in practice wrt security, but implementation-wise it
seems fine, so:
Acked-by: Vlastimil Babka
Maybe you don't want to warn just once, though? We had a similar discussion on
cache_to_obj().
> ---
> mm/slab.c | 14 --
> 1 file changed, 12 insertions
rability.pdf
>
> Fixes: 598a0717a816 ("mm/slab: validate cache membership under freelist
> hardening")
> Signed-off-by: Kees Cook
Acked-by: Vlastimil Babka
> ---
> init/Kconfig | 9 +
> 1 file changed, 5 insertions(+), 4 deletions(-)
>
> diff --git
On 8/3/20 6:30 PM, Uladzislau Rezki (Sony) wrote:
> Some background and kfree_rcu()
> ===
> The pointers to be freed are stored in the per-cpu array to improve
> performance, to enable an easier-to-use API, to accommodate vmalloc
> memory and to support a single argumen
suboptimal but it doesn't cause any problem.
>
> Suggested-by: Michal Hocko
> Signed-off-by: Joonsoo Kim
Acked-by: Vlastimil Babka
> ---
> include/linux/hugetlb.h | 2 ++
> mm/gup.c | 17 -
> 2 files changed, 10 insertions(+), 9
On 8/4/20 4:35 AM, Cho KyongHo wrote:
> On Mon, Aug 03, 2020 at 05:45:55PM +0200, Vlastimil Babka wrote:
>> On 8/3/20 9:57 AM, David Hildenbrand wrote:
>> > On 03.08.20 08:10, pullip@samsung.com wrote:
>> >> From: Cho KyongHo
>> >>
>>
On 8/3/20 9:57 AM, David Hildenbrand wrote:
> On 03.08.20 08:10, pullip@samsung.com wrote:
>> From: Cho KyongHo
>>
>> LPDDR5 introduces a rank switch delay. If three successive DRAM accesses
>> happen and the first and the second ones access one rank and the last
>> access happens on the other
stem is not using the benefits offered by the pcp lists when there is a
> single onlineable memory block in a zone. Correct this by always
> updating the pcp lists when a memory block is onlined.
>
> Signed-off-by: Charan Teja Reddy
Makes sense to me.
Acked-by: Vlastimil Babka
>
On 7/21/20 2:05 PM, Matthew Wilcox wrote:
> On Tue, Jul 21, 2020 at 12:28:49PM +0900, js1...@gmail.com wrote:
>> +static inline unsigned int current_alloc_flags(gfp_t gfp_mask,
>> +unsigned int alloc_flags)
>> +{
>> +#ifdef CONFIG_CMA
>> +unsigned int pflags
sts for exactly this purpose.
> Fixes: d7fefcc8de91 ("mm/cma: add PF flag to force non cma alloc")
> Cc:
> Signed-off-by: Joonsoo Kim
Reviewed-by: Vlastimil Babka
Thanks!
On 7/17/20 10:10 AM, Vlastimil Babka wrote:
> On 7/17/20 9:29 AM, Joonsoo Kim wrote:
>> On Thu, Jul 16, 2020 at 4:45 PM, Vlastimil Babka wrote:
>>>
>>> On 7/16/20 9:27 AM, Joonsoo Kim wrote:
>>> > On Wed, Jul 15, 2020 at 5:24 PM, Vlastimil Babka wrote:
On 7/17/20 9:29 AM, Joonsoo Kim wrote:
> On Thu, Jul 16, 2020 at 4:45 PM, Vlastimil Babka wrote:
>>
>> On 7/16/20 9:27 AM, Joonsoo Kim wrote:
>> > On Wed, Jul 15, 2020 at 5:24 PM, Vlastimil Babka wrote:
>> >> > /*
>> >> > * get_page_from_freelist goes
On 7/16/20 6:51 PM, Muchun Song wrote:
> If the kmem_cache refcount is greater than one, we should not
> mark the root kmem_cache as dying. If we mark the root kmem_cache
> dying incorrectly, the non-root kmem_cache can never be destroyed.
> This resulted in a memory leak when the memcg was destroyed. We c
On 7/16/20 9:27 AM, Joonsoo Kim wrote:
> On Wed, Jul 15, 2020 at 5:24 PM, Vlastimil Babka wrote:
>> > /*
>> > * get_page_from_freelist goes through the zonelist trying to allocate
>> > * a page.
>> > @@ -3706,6 +3714,8 @@ get_page_from_freelist(gfp_t
On 7/15/20 5:13 PM, Muchun Song wrote:
> On Wed, Jul 15, 2020 at 7:32 PM Vlastimil Babka wrote:
>>
>> On 7/7/20 8:27 AM, Muchun Song wrote:
>> > If the kmem_cache refcount is greater than one, we should not
>> > mark the root kmem_cache as dying. If we ma
On 7/7/20 8:27 AM, Muchun Song wrote:
> If the kmem_cache refcount is greater than one, we should not
> mark the root kmem_cache as dying. If we mark the root kmem_cache
> dying incorrectly, the non-root kmem_cache can never be destroyed.
> This resulted in a memory leak when the memcg was destroyed. We ca
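The invariant the fix enforces can be sketched like this, with the teardown helper name hypothetical (illustrative, not the exact patch):

    void kmem_cache_destroy(struct kmem_cache *s)
    {
        mutex_lock(&slab_mutex);
        s->refcount--;
        if (s->refcount) {
            /* Cache still shared via merging/aliasing: it must NOT be
             * marked dying, or its non-root (memcg) children could
             * never be destroyed later - the leak described above.
             */
            mutex_unlock(&slab_mutex);
            return;
        }
        s->memcg_params.dying = true;   /* last reference dropped */
        shutdown_children_and_cache(s); /* hypothetical teardown helper */
        mutex_unlock(&slab_mutex);
    }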
; but will not generate bogus warnings.
>
> Signed-off-by: Roman Gushchin
> Cc: Hugh Dickins
> Signed-off-by: Roman Gushchin
Acked-by: Vlastimil Babka
ueue cannot be utilized.
>
> This patch tries to fix this situation by making the dequeue function on
> hugetlb CMA aware. In the dequeue function, CMA memory is skipped if
> PF_MEMALLOC_NOCMA flag is found.
>
> Acked-by: Mike Kravetz
> Signed-off-by: Joonsoo Kim
Acked-by: Vlastimil Babka
On 7/15/20 7:05 AM, js1...@gmail.com wrote:
> From: Joonsoo Kim
>
> Currently, avoiding the CMA area in page allocation is implemented by using
> current_gfp_context(). However, there are two problems with this
> implementation.
>
> First, this doesn't work for allocation fastpath. In the fastpath,
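For context, the API pair works like this (a minimal usage sketch; the series' point is that the allocation fastpath ignored the resulting flag):

    static struct page *alloc_non_cma_page(void)
    {
        /* sets PF_MEMALLOC_NOCMA for the current task */
        unsigned int flags = memalloc_nocma_save();
        struct page *page = alloc_page(GFP_HIGHUSER_MOVABLE);

        memalloc_nocma_restore(flags);
        return page;    /* without the fix, may still be a CMA page */
    }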
On 7/14/20 11:57 AM, Wei Yang wrote:
> On Tue, Jul 14, 2020 at 11:22:03AM +0200, Vlastimil Babka wrote:
>>On 7/14/20 11:13 AM, Vlastimil Babka wrote:
>>> On 7/14/20 9:34 AM, Wei Yang wrote:
>>>> The second parameter of for_each_node_mask_to_[alloc|free] is a loop
On 7/13/20 3:57 AM, Robbie Ko wrote:
>
> On 7/10/20 11:31 PM, Vlastimil Babka wrote:
>> On 7/9/20 4:48 AM, robbieko wrote:
>>> From: Robbie Ko
>>>
>>> When a page migration occurs, we first create a migration entry
>>> to replace the original pte, and t
On 7/13/20 6:43 PM, Alexander A. Klimov wrote:
> Rationale:
> Reduces attack surface on kernel devs opening the links for MITM
> as HTTPS traffic is much harder to manipulate.
>
> Deterministic algorithm:
> For each file:
> If not .svg:
> For each line:
> If doesn't contain `\bxmlns\b`
On 7/14/20 11:13 AM, Vlastimil Babka wrote:
> On 7/14/20 9:34 AM, Wei Yang wrote:
>> The second parameter of for_each_node_mask_to_[alloc|free] is a loop
>> variable, which is not used outside of the loop iteration.
>>
>> Let's hide this.
>>
>> Signed-of
On 7/14/20 9:34 AM, Wei Yang wrote:
> The second parameter of for_each_node_mask_to_[alloc|free] is a loop
> variable, which is not used outside of the loop iteration.
>
> Let's hide this.
>
> Signed-off-by: Wei Yang
> ---
> mm/hugetlb.c | 38 --
> 1 file changed,
On 7/13/20 8:41 AM, js1...@gmail.com wrote:
> From: Joonsoo Kim
Nit: s/make/introduce/ in the subject; it's a more common verb in this context.
n seen during
> large mmaps initialization. There is no indication that this is a
> problem for migration as well but theoretically the same might happen
> when migrating large mappings to a different node. Make the migration
> callback consistent with regular THP allocations.
>
> Signed-of
On 7/9/20 4:48 AM, robbieko wrote:
> From: Robbie Ko
>
> When a page migration occurs, we first create a migration entry
> to replace the original pte, and then go to fallback_migrate_page
> to execute a writeout if the migratepage callback is not supported.
>
> In the writeout, we will clear the dirty bit
ading mmap_lock in __do_munmap() if detached
> VMAs are next to VM_GROWSDOWN or VM_GROWSUP VMA.
>
> Signed-off-by: Kirill A. Shutemov
> Reported-by: Jann Horn
> Fixes: dd2283f2605e ("mm: mmap: zap pages with read mmap_sem in munmap")
> Cc: # 4.20
> Cc: Yang Shi
> C
>
> Signed-off-by: Roman Gushchin
Acked-by: Vlastimil Babka
> ---
> mm/slab.c | 4 ++--
> mm/slab.h | 8
> mm/slub.c | 4 ++--
> 3 files changed, 8 insertions(+), 8 deletions(-)
>
> diff --git a/mm/slab.c b/mm/slab.c
> index fafd46877504..300adfb6
significantly exceeds the
> cost of a jump. However, the conversion makes the code look more
> logical.
>
> Signed-off-by: Roman Gushchin
Acked-by: Vlastimil Babka
> ---
> include/linux/memcontrol.h | 2 +-
> 1 file changed, 1 insertion(+), 1 deletion(-)
>
> diff --
On 7/7/20 7:36 PM, Roman Gushchin wrote:
> charge_slab_page() is not using the gfp argument anymore,
> remove it.
>
> Signed-off-by: Roman Gushchin
Acked-by: Vlastimil Babka
> ---
> mm/slab.c | 2 +-
> mm/slab.h | 3 +--
> mm/slub.c | 2 +-
> 3 files changed, 3
On 7/8/20 9:41 AM, Michal Hocko wrote:
> On Wed 08-07-20 16:16:02, Joonsoo Kim wrote:
>> On Tue, Jul 07, 2020 at 01:22:31PM +0200, Vlastimil Babka wrote:
>>
>> Simply, I call memalloc_nocma_{save,restore} in new_non_cma_page(). It
>> would not cause any problem.
>
On 7/7/20 9:44 AM, js1...@gmail.com wrote:
> From: Joonsoo Kim
>
> There is a well-defined standard migration target callback. Use it
> directly.
>
> Signed-off-by: Joonsoo Kim
Acked-by: Vlastimil Babka
> ---
> mm/memory-failure.c | 18 ++
> 1
>
> Signed-off-by: Joonsoo Kim
Acked-by: Vlastimil Babka
Thanks! Nitpick below.
> @@ -1345,9 +1324,28 @@ do_migrate_range(unsigned long start_pfn, unsigned
> long end_pfn)
> put_page(page);
> }
> if (!list_empty(&source)) {
> - /* Al
On 7/7/20 9:44 AM, js1...@gmail.com wrote:
> From: Joonsoo Kim
>
> There are some similar functions for migration target allocation. Since
> there is no fundamental difference, it's better to keep just one rather
> than keeping all variants. This patch implements base migration target
> allocat
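The base callback this series converges on pairs one allocator function with a small control struct passed through the private argument; a usage sketch under the 5.9-era definitions:

    struct migration_target_control mtc = {
        .nid = NUMA_NO_NODE,                    /* or a preferred node */
        .gfp_mask = GFP_USER | __GFP_MOVABLE,
    };

    /* every migration user can now share alloc_migration_target() */
    migrate_pages(&source_list, alloc_migration_target, NULL,
                  (unsigned long)&mtc, MIGRATE_SYNC, MR_MEMORY_HOTPLUG);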
On 7/7/20 1:48 PM, Michal Hocko wrote:
> On Tue 07-07-20 16:44:48, Joonsoo Kim wrote:
>> From: Joonsoo Kim
>>
>> There is a well-defined standard migration target callback. Use it
>> directly.
>>
>> Signed-off-by: Joonsoo Kim
>> ---
>> mm/memory-failure.c | 18 ++
>> 1 file cha
On 7/7/20 9:44 AM, js1...@gmail.com wrote:
> From: Joonsoo Kim
>
> In mm/migrate.c, THP allocation for migration is called with the provided
> gfp_mask | GFP_TRANSHUGE. This gfp_mask contains __GFP_RECLAIM and it
> would conflict with the intention of GFP_TRANSHUGE.
>
> GFP_TRANSHUGE/GFP_
On 7/7/20 9:44 AM, js1...@gmail.com wrote:
> From: Joonsoo Kim
>
> new_non_cma_page() in gup.c, which tries to allocate a migration target
> page, needs to allocate the new page somewhere other than the CMA area.
> new_non_cma_page() implements this by removing the __GFP_MOVABLE flag. This way
> works well for
age_nodemask() are changed
> to provide gfp_mask.
>
> Note that it's safe to remove a node id check in alloc_huge_page_node()
> since there is no caller passing NUMA_NO_NODE as a node id.
>
> Reviewed-by: Mike Kravetz
> Signed-off-by: Joonsoo Kim
Yeah, this version looks very good :)
Reviewed-by: Vlastimil Babka
Thanks!
On 6/23/20 8:13 AM, js1...@gmail.com wrote:
> From: Joonsoo Kim
>
> There is a well-defined standard migration target callback.
> Use it directly.
>
> Signed-off-by: Joonsoo Kim
Acked-by: Vlastimil Babka
But you could move this to patch 5/8 to reduce churn. And do the s
On 6/23/20 8:13 AM, js1...@gmail.com wrote:
> From: Joonsoo Kim
>
> There is a well-defined migration target allocation callback.
> Use it.
>
> Signed-off-by: Joonsoo Kim
Acked-by: Vlastimil Babka
I like that this removes the wrapper completely.
ion target
> allocation callback and use it on gup.c.
>
> Signed-off-by: Joonsoo Kim
Acked-by: Vlastimil Babka
But a suggestion below.
> ---
> mm/gup.c | 57 -
> mm/internal.h | 1 +
> mm/migrate.c | 4 +
Joonsoo Kim
Provided that the "&= ~__GFP_RECLAIM" line is split into a separate patch as you discussed,
Acked-by: Vlastimil Babka
memcg_kmem_enabled() irreversible (always returning true
>> after returning true for the first time), it'll make the general logic
>> more simple and robust. It also will allow to guard some checks which
>> otherwise would stay unguarded.
>>
>> Signed-off-by: Roman Gus
On 6/26/20 6:02 AM, Joonsoo Kim wrote:
> On Thu, Jun 25, 2020 at 8:26 PM, Michal Hocko wrote:
>>
>> On Tue 23-06-20 15:13:43, Joonsoo Kim wrote:
>> > From: Joonsoo Kim
>> >
>> > There is no difference between the two migration callback functions,
>> > alloc_huge_page_node() and alloc_huge_page_nodemask(), ex