Hi Marek,
On Mon, May 11, 2020 at 08:36:41AM +0200, Marek Szyprowski wrote:
> Hi Mike,
>
> On 08.05.2020 19:42, Mike Rapoport wrote:
> > On Fri, May 08, 2020 at 08:53:27AM +0200, Marek Szyprowski wrote:
> >> On 07.05.2020 18:11, Mike Rapoport wrote:
> >>> On T
Cc: Palmer Dabbelt
> Cc: Heiko Carstens
> Cc: Vasily Gorbik
> Cc: Christian Borntraeger
> Cc: Yoshinori Sato
> Cc: Rich Felker
> Cc: "David S. Miller"
> Cc: Thomas Gleixner
> Cc: Ingo Molnar
> Cc: Borislav Petkov
> Cc: "H. Peter Anvin"
> C
On Fri, May 08, 2020 at 08:53:27AM +0200, Marek Szyprowski wrote:
> Hi Mike,
>
> On 07.05.2020 18:11, Mike Rapoport wrote:
> > On Thu, May 07, 2020 at 02:16:56PM +0200, Marek Szyprowski wrote:
> >> On 14.04.2020 17:34, Mike Rapoport wrote:
> >>> From:
Hi,
On Thu, May 07, 2020 at 02:16:56PM +0200, Marek Szyprowski wrote:
> Hi
>
> On 14.04.2020 17:34, Mike Rapoport wrote:
> > From: Mike Rapoport
> >
> > Implement primitives necessary for the 4th level folding, add walks of p4d
> > level where appropriate,
On Tue, May 05, 2020 at 06:18:11AM -0700, Guenter Roeck wrote:
> On 5/4/20 8:39 AM, Mike Rapoport wrote:
> > On Sun, May 03, 2020 at 11:43:00AM -0700, Guenter Roeck wrote:
> >> On Sun, May 03, 2020 at 10:41:38AM -0700, Guenter Roeck wrote:
> >>> Hi,
> >>>
On Sun, May 03, 2020 at 11:43:00AM -0700, Guenter Roeck wrote:
> On Sun, May 03, 2020 at 10:41:38AM -0700, Guenter Roeck wrote:
> > Hi,
> >
> > On Wed, Apr 29, 2020 at 03:11:23PM +0300, Mike Rapoport wrote:
> > > From: Mike Rapoport
> > >
On Wed, Apr 29, 2020 at 03:11:22PM +0300, Mike Rapoport wrote:
> From: Mike Rapoport
>
> The commit f47ac088c406 ("mm: memmap_init: iterate over memblock regions
> rather that check each PFN") made early_pfn_in_nid() obsolete and since
> CONFIG_NODES_SPAN_OTHER_NODES is
On Wed, Apr 29, 2020 at 07:17:06AM -0700, Christoph Hellwig wrote:
> On Wed, Apr 29, 2020 at 03:11:22PM +0300, Mike Rapoport wrote:
> > From: Mike Rapoport
> >
> > The commit f47ac088c406 ("mm: memmap_init: iterate over memblock regions
> > rather that check
From: Mike Rapoport
to reflect the updates to free_area_init() family of functions.
Signed-off-by: Mike Rapoport
---
Documentation/vm/memory-model.rst | 9 -
1 file changed, 4 insertions(+), 5 deletions(-)
diff --git a/Documentation/vm/memory-model.rst
b/Documentation/vm/memory
From: Mike Rapoport
The find_min_pfn_with_active_regions() calls find_min_pfn_for_node() with
the nid parameter set to MAX_NUMNODES. This makes find_min_pfn_for_node()
traverse all memblock memory regions although the first PFN in the system
can be easily found with memblock_start_of_DRAM
From: Mike Rapoport
The free_area_init_node() now always uses memblock info and the zone PFN
limits so it does not need the backwards compatibility functions to
calculate the zone spanned and absent pages. The removal of the compat_
versions of zone_{absent,spanned}_pages_in_node() in turn
From: Mike Rapoport
Some architectures (e.g. ARC) have the ZONE_HIGHMEM zone below the
ZONE_NORMAL. Allowing free_area_init() to parse the max_zone_pfn array even if it is
sorted in descending order allows using free_area_init() on such
architectures.
Add top -> down traversal of max_zone_pfn ar
From: Mike Rapoport
The commit f47ac088c406 ("mm: memmap_init: iterate over memblock regions
rather that check each PFN") made early_pfn_in_nid() obsolete and since
CONFIG_NODES_SPAN_OTHER_NODES is only used to pick a stub or a real
implementation of early_pfn_in_nid() it is also
-by: Baoquan He
Signed-off-by: Mike Rapoport
---
mm/page_alloc.c | 47 ---
1 file changed, 16 insertions(+), 31 deletions(-)
diff --git a/mm/page_alloc.c b/mm/page_alloc.c
index 7f6a3081edb8..8d112defaead 100644
--- a/mm/page_alloc.c
+++ b/mm
From: Mike Rapoport
The free_area_init() function only requires the definition of maximal PFN
for each of the supported zones rather than a calculation of actual zone sizes
and the sizes of the holes between the zones.
After removal of CONFIG_HAVE_MEMBLOCK_NODE_MAP the free_area_init() is
available
From: Mike Rapoport
The free_area_init() has effectively become a wrapper for
free_area_init_nodes() and there is no point in keeping it. Still, the
free_area_init() name is shorter and more general as it does not imply
the necessity to initialize multiple nodes.
Rename free_area_init_nodes
From: Mike Rapoport
Currently, architectures that use free_area_init() to initialize memory map
and node and zone structures need to calculate zone and hole sizes. We can
use free_area_init_nodes() instead and let it detect the zone boundaries
while the architectures will only have to supply
From: Mike Rapoport
The CONFIG_HAVE_MEMBLOCK_NODE_MAP is used to differentiate initialization
of nodes and zones structures between the systems that have region to node
mapping in memblock and those that don't.
Currently all the NUMA architectures enable this option and for the
non-NUMA systems
From: Mike Rapoport
The early_pfn_to_nid() and its helper __early_pfn_to_nid() are spread
around include/linux/mm.h, include/linux/mmzone.h and mm/page_alloc.c.
Drop unused stub for __early_pfn_to_nid() and move its actual generic
implementation close to its users.
Signed-off-by: Mike
From: Mike Rapoport
There are several places in the code that directly dereference
memblock_region.nid despite this field being defined only when
CONFIG_HAVE_MEMBLOCK_NODE_MAP=y.
Replace these with calls to memblock_get_region_nid() to improve code
robustness and to avoid possible breakage when
From: Mike Rapoport
Hi,
After the discussion [1] about removal of CONFIG_NODES_SPAN_OTHER_NODES and
CONFIG_HAVE_MEMBLOCK_NODE_MAP options, I took it a bit further and updated
the node/zone initialization.
Since all architectures have memblock, it is possible to use only the newer
version
ed by some
architectures to set up ALL huge pages sizes.
Signed-off-by: Mike Kravetz
Acked-by: Mina Almasry
Reviewed-by: Peter Xu
Acked-by: Gerald Schaefer [s390]
Acked-by: Will Deacon
---
arch/arm64/mm/hugetlbpage.c | 15 ---
arch/powerpc/mm/hugetlbpage.c | 15 ---
processing "hugepagesz=".
After this, calls to size_to_hstate() in arch specific code can be
removed and hugetlb_add_hstate can be called without worrying about
warning messages.
Signed-off-by: Mike Kravetz
Acked-by: Mina Almasry
Acked-by: Gerald Schaefer [s390]
Acked-by: Will Deacon
Test
the bootmem allocator required
for gigantic allocations is not available at this time.
Signed-off-by: Mike Kravetz
Acked-by: Gerald Schaefer [s390]
Acked-by: Will Deacon
Tested-by: Sandipan Das
---
.../admin-guide/kernel-parameters.txt | 40 +++--
Documentation/admin-guide/mm
want additional changes to
hugepages_supported() for x86? If that is needed I would prefer
a separate patch.)
Longpeng(Mike) reported a weird message from hugetlb command line processing
and proposed a solution [1]. While the proposed patch does address the
specific issue
of the "hugepagesz=" in arch specific code to a common
routine in arch independent code.
Signed-off-by: Mike Kravetz
Acked-by: Gerald Schaefer [s390]
Acked-by: Will Deacon
---
arch/arm64/mm/hugetlbpage.c | 17 +
arch/powerpc/mm/hugetlbpage.c | 20 +---
arc
On 4/27/20 1:18 PM, Andrew Morton wrote:
> On Mon, 27 Apr 2020 12:09:47 -0700 Mike Kravetz
> wrote:
>
>> Previously, a check for hugepages_supported was added before processing
>> hugetlb command line parameters. On some architectures such as powerpc,
>> hugep
On 4/27/20 10:25 AM, Mike Kravetz wrote:
> On 4/26/20 10:04 PM, Sandipan Das wrote:
>> On 18/04/20 12:20 am, Mike Kravetz wrote:
>>> Now that architectures provide arch_hugetlb_valid_size(), parsing
>>> of "hugepagesz=" can be done in architecture indep
On 4/26/20 10:04 PM, Sandipan Das wrote:
> Hi Mike,
>
> On 18/04/20 12:20 am, Mike Kravetz wrote:
>> Now that architectures provide arch_hugetlb_valid_size(), parsing
>> of "hugepagesz=" can be done in architecture independent code.
>> Create a single
On Fri, Apr 24, 2020 at 09:22:32AM +0200, David Hildenbrand wrote:
> On 12.04.20 21:48, Mike Rapoport wrote:
> > From: Baoquan He
> >
> > When called during boot the memmap_init_zone() function checks if each PFN
> > is valid and actually belongs to the n
On Thu, Apr 23, 2020 at 11:14:54AM +0800, Baoquan He wrote:
> On 04/12/20 at 10:48pm, Mike Rapoport wrote:
> > From: Mike Rapoport
> >
> > The free_area_init_node() is only used by x86 to initialize
> > memory-less nodes.
> > Make its name reflect this and
On Thu, Apr 23, 2020 at 10:57:20AM +0800, Baoquan He wrote:
> On 04/23/20 at 10:53am, Baoquan He wrote:
> > On 04/12/20 at 10:48pm, Mike Rapoport wrote:
> > > From: Mike Rapoport
> > >
> > > Some architectures (e.g. ARC) have the ZONE_HIGHMEM zone b
On Thu, Apr 23, 2020 at 09:13:12AM +0800, Baoquan He wrote:
> On 04/12/20 at 10:48pm, Mike Rapoport wrote:
> > From: Mike Rapoport
> >
> > The commit f47ac088c406 ("mm: memmap_init: iterate over memblock regions
>
> This commit id should be a temporary
On 4/22/20 3:42 AM, Aneesh Kumar K.V wrote:
> Mike Kravetz writes:
>
>> The routine hugetlb_add_hstate prints a warning if the hstate already
>> exists. This was originally done as part of kernel command line
>> parsing. If 'hugepagesz=' was specified mor
On Tue, Apr 21, 2020 at 12:23:16PM +0800, Baoquan He wrote:
> On 04/12/20 at 10:48pm, Mike Rapoport wrote:
> > From: Mike Rapoport
> >
> > The CONFIG_HAVE_MEMBLOCK_NODE_MAP is used to differentiate initialization
> > of nodes and zones structures between the syste
On Tue, Apr 21, 2020 at 10:24:35AM +0800, Baoquan He wrote:
> On 04/12/20 at 10:48pm, Mike Rapoport wrote:
> > From: Mike Rapoport
> >
> > The early_pfn_to_nid() and it's helper __early_pfn_to_nid() are spread
> > around include/linux/mm.h, include/linux/m
On Tue, Apr 21, 2020 at 11:31:14AM +0800, Baoquan He wrote:
> On 04/12/20 at 10:48pm, Mike Rapoport wrote:
> > From: Mike Rapoport
> >
> > The early_pfn_to_nid() and it's helper __early_pfn_to_nid() are spread
> > around include/linux/mm.h, include/linux/m
On 4/20/20 1:29 PM, Anders Roxell wrote:
> On Mon, 20 Apr 2020 at 20:23, Mike Kravetz wrote:
>> On 4/20/20 8:34 AM, Qian Cai wrote:
>>>
>>> Reverted this series fixed many undefined behaviors on arm64 with the
>>> config,
>> While rearranging the code
On 4/20/20 8:34 AM, Qian Cai wrote:
>
>
>> On Apr 17, 2020, at 2:50 PM, Mike Kravetz wrote:
>>
>> Longpeng(Mike) reported a weird message from hugetlb command line processing
>> and proposed a solution [1]. While the proposed patch does address the
>> spe
, but it sounds like we may want additional changes to
hugepages_supported() for x86? If that is needed I would prefer
a separate patch.)
Longpeng(Mike) reported a weird message from hugetlb command line processing
and proposed a solution [1]. While the proposed patch does address
allocator required
for gigantic allocations is not available at this time.
Signed-off-by: Mike Kravetz
---
.../admin-guide/kernel-parameters.txt | 40 +++--
Documentation/admin-guide/mm/hugetlbpage.rst | 35
mm/hugetlb.c | 159 ++
3 files
of the "hugepagesz=" in arch specific code to a common
routine in arch independent code.
Signed-off-by: Mike Kravetz
---
arch/arm64/mm/hugetlbpage.c | 17 +
arch/powerpc/mm/hugetlbpage.c | 20 +---
arch/riscv/mm/hugetlbpage.c | 26 +-
ar
processing "hugepagesz=".
After this, calls to size_to_hstate() in arch specific code can be
removed and hugetlb_add_hstate can be called without worrying about
warning messages.
Signed-off-by: Mike Kravetz
Acked-by: Mina Almasry
---
arch/arm64/mm/hugetlbpage.c | 16
arch/powe
ed by some
architectures to set up ALL huge pages sizes.
Signed-off-by: Mike Kravetz
Acked-by: Mina Almasry
Reviewed-by: Peter Xu
---
arch/arm64/mm/hugetlbpage.c | 15 ---
arch/powerpc/mm/hugetlbpage.c | 15 ---
arch/riscv/mm/hugetlbpage.c | 16
ar
From: Mike Rapoport
There are no architectures that use include/asm-generic/5level-fixup.h
therefore it can be removed along with __ARCH_HAS_5LEVEL_HACK define and
the code it surrounds
Signed-off-by: Mike Rapoport
---
include/asm-generic/5level-fixup.h | 58
From: Mike Rapoport
No architecture defines __ARCH_USE_5LEVEL_HACK and therefore
pgtable-nop4d-hack.h will be never actually included.
Remove it.
Signed-off-by: Mike Rapoport
---
include/asm-generic/pgtable-nop4d-hack.h | 64
include/asm-generic/pgtable-nopud.h
From: Mike Rapoport
The unicore32 architecture has 2-level page tables and uses
asm-generic/pgtable-nopmd.h with explicit casts from pud_t to pgd_t for page
table folding.
Add p4d walk in the only place that actually unfolds the pud level and
remove __ARCH_USE_5LEVEL_HACK.
Signed-off-by: Mike
From: Mike Rapoport
Implement primitives necessary for the 4th level folding, add walks of p4d
level where appropriate and remove usage of __ARCH_USE_5LEVEL_HACK.
Signed-off-by: Mike Rapoport
---
arch/sh/include/asm/pgtable-2level.h | 1 -
arch/sh/include/asm/pgtable-3level.h | 1 -
arch/sh
From: Mike Rapoport
The __pXd_offset() macros are identical to the pXd_index() macros and there
is no point in keeping both of them. All architectures define and use
pXd_index() so let's keep only those to make mips consistent with the rest
of the kernel.
Signed-off-by: Mike Rapoport
---
arch/sh
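The identity the patch relies on is easy to see in a stand-alone sketch. The shift and table size below are made-up values, not any real architecture's, and the function is a user-space model of the macro shape rather than the kernel's definition.

```c
#include <assert.h>

/* The pXd_index() shape the patch standardizes on: shift the virtual
 * address down by the level's shift, then mask to the number of
 * entries in that level's table. __pXd_offset() computed exactly the
 * same value, hence the removal. Constants here are illustrative. */
#define PGDIR_SHIFT  30
#define PTRS_PER_PGD 512UL

static unsigned long pgd_index(unsigned long address)
{
	return (address >> PGDIR_SHIFT) & (PTRS_PER_PGD - 1);
}
```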
From: Geert Uytterhoeven
- Convert from printk() to pr_*(),
- Add missing continuations,
- Use "%llx" to format u64,
- Join multiple prints in show_fault_oops() into a single print.
Signed-off-by: Geert Uytterhoeven
Signed-off-by: Mike Rapoport
---
arch/sh/mm/fa
From: Mike Rapoport
Implement primitives necessary for the 4th level folding, add walks of p4d
level where appropriate and replace 5level-fixup.h with pgtable-nop4d.h.
Signed-off-by: Mike Rapoport
Tested-by: Christophe Leroy # 8xx and 83xx
---
arch/powerpc/include/asm/book3s/32/pgtable.h
From: Mike Rapoport
Implement primitives necessary for the 4th level folding, add walks of p4d
level where appropriate and remove usage of __ARCH_USE_5LEVEL_HACK.
Signed-off-by: Mike Rapoport
---
arch/openrisc/include/asm/pgtable.h | 1 -
arch/openrisc/mm/fault.c| 10
From: Mike Rapoport
Implement primitives necessary for the 4th level folding, add walks of p4d
level where appropriate and remove usage of __ARCH_USE_5LEVEL_HACK.
Signed-off-by: Mike Rapoport
---
arch/nios2/include/asm/pgtable.h | 3 +--
arch/nios2/mm/fault.c| 9 +++--
arch
From: Mike Rapoport
Implement primitives necessary for the 4th level folding, add walks of p4d
level where appropriate, remove usage of __ARCH_USE_5LEVEL_HACK and replace
5level-fixup.h with pgtable-nop4d.h
Signed-off-by: Mike Rapoport
---
arch/ia64/include/asm/pgalloc.h | 4 ++--
arch/ia64
From: Mike Rapoport
The hexagon architecture has 2-level page tables and as such most of the
page table folding is already implemented in asm-generic/pgtable-nopmd.h.
Fix up the only place in arch/hexagon to unfold the p4d level and remove
__ARCH_USE_5LEVEL_HACK.
Signed-off-by: Mike Rapoport
From: Mike Rapoport
Implement primitives necessary for the 4th level folding, add walks of p4d
level where appropriate, replace 5level-fixup.h with pgtable-nop4d.h and
remove __ARCH_USE_5LEVEL_HACK.
Signed-off-by: Mike Rapoport
---
arch/arm64/include/asm/kvm_mmu.h| 10 +-
arch/arm64
From: Mike Rapoport
Implement primitives necessary for the 4th level folding, add walks of p4d
level where appropriate, and remove __ARCH_USE_5LEVEL_HACK.
Signed-off-by: Mike Rapoport
---
arch/arm/include/asm/pgtable.h | 1 -
arch/arm/lib/uaccess_with_memcpy.c | 7 +-
arch/arm/mach
From: Mike Rapoport
h8300 is a nommu architecture and does not require fixup for upper layers
of the page tables because it is already handled by the generic nommu
implementation.
Remove definition of __ARCH_USE_5LEVEL_HACK in
arch/h8300/include/asm/pgtable.h
Signed-off-by: Mike Rapoport
From: Mike Rapoport
Hi,
These patches convert several architectures to use page table folding and
remove __ARCH_HAS_5LEVEL_HACK along with include/asm-generic/5level-fixup.h
and include/asm-generic/pgtable-nop4d-hack.h. With that we'll have a single
and consistent way of dealing with page table
On 4/10/20 1:37 PM, Peter Xu wrote:
> On Wed, Apr 01, 2020 at 11:38:19AM -0700, Mike Kravetz wrote:
>> With all hugetlb page processing done in a single file clean up code.
>> - Make code match desired semantics
>> - Update documentation with semantics
>> - Make all w
On 4/10/20 12:16 PM, Peter Xu wrote:
> On Wed, Apr 01, 2020 at 11:38:16AM -0700, Mike Kravetz wrote:
>> diff --git a/arch/arm64/include/asm/hugetlb.h
>> b/arch/arm64/include/asm/hugetlb.h
>> index 2eb6c234d594..81606223494f 100644
>> --- a/arch/arm64/include/asm/hu
From: Mike Rapoport
The free_area_init_node() is only used by x86 to initialize memory-less
nodes.
Make its name reflect this and drop all the function parameters except the
node ID, as they are all zero anyway.
Signed-off-by: Mike Rapoport
---
arch/x86/mm/numa.c | 5 +
include/linux/mm.h | 9
-by: Baoquan He
Signed-off-by: Mike Rapoport
---
mm/page_alloc.c | 26 --
1 file changed, 16 insertions(+), 10 deletions(-)
diff --git a/mm/page_alloc.c b/mm/page_alloc.c
index 7f6a3081edb8..c43ce8709457 100644
--- a/mm/page_alloc.c
+++ b/mm/page_alloc.c
@@ -5995,14
On Tue, Mar 31, 2020 at 04:21:38PM +0200, Michal Hocko wrote:
> On Tue 31-03-20 22:03:32, Baoquan He wrote:
> > Hi Michal,
> >
> > On 03/31/20 at 10:55am, Michal Hocko wrote:
> > > On Tue 31-03-20 11:14:23, Mike Rapoport wrote:
> > > > Maybe I mis-rea
of the "hugepagesz=" in arch specific code to a common
routine in arch independent code.
Signed-off-by: Mike Kravetz
---
arch/arm64/include/asm/hugetlb.h | 2 ++
arch/arm64/mm/hugetlbpage.c| 17 +
arch/powerpc/include/asm/hugetlb.h | 3 +++
arch/powerpc/mm/hugetlbpage.c
processing "hugepagesz=".
After this, calls to size_to_hstate() in arch specific code can be
removed and hugetlb_add_hstate can be called without worrying about
warning messages.
Signed-off-by: Mike Kravetz
---
arch/arm64/mm/hugetlbpage.c | 16
arch/powerpc/mm/hugetlbpage.c |
() before processing parameters.
- Add comments to code
- Describe some of the subtle interactions
- Describe semantics of command line arguments
Signed-off-by: Mike Kravetz
---
.../admin-guide/kernel-parameters.txt | 35 ---
Documentation/admin-guide/mm/hugetlbpage.rst | 44
ed by some
architectures to set up ALL huge pages sizes.
Signed-off-by: Mike Kravetz
---
arch/arm64/mm/hugetlbpage.c | 15 ---
arch/powerpc/mm/hugetlbpage.c | 15 ---
arch/riscv/mm/hugetlbpage.c | 16
arch/s390/mm/hugetlbpage.c| 18 --
Hi,
On Wed, Apr 01, 2020 at 01:42:27PM +0800, Baoquan He wrote:
> On 04/01/20 at 12:56am, Mike Rapoport wrote:
> > On Mon, Mar 30, 2020 at 11:58:43AM +0200, Michal Hocko wrote:
> > >
> > > What would it take to make ia64 use HAVE_MEMBLOCK_NODE_MAP? I would
> > &
assumptions, it's possible to select the functions
that calculate spanned and absent pages at runtime.
This patch builds for arm and x86-64 and boots on qemu-system for both.
From f907df987db4d6735c4940b30cfb4764fc0007d4 Mon Sep 17 00:00:00 2001
From: Mike Rapoport
Date: Wed, 1 Ap
On Mon, Mar 30, 2020 at 08:23:01PM +0200, Michal Hocko wrote:
> On Mon 30-03-20 20:51:00, Mike Rapoport wrote:
> > On Mon, Mar 30, 2020 at 09:42:46AM +0200, Michal Hocko wrote:
> > > On Sat 28-03-20 11:31:17, Hoan Tran wrote:
> > > > In NUMA layout which nodes have