for the creation of a special type of
writable memory (shadow stack) that is only writable in limited specific
ways. Previously, changes were proposed to core MM code to teach it to
decide when to create normally writable memory or the special shadow stack
writable memory, but David Hildenbrand suggested
On 01.03.23 08:03, Christophe Leroy wrote:
Le 27/02/2023 à 23:29, Rick Edgecombe a écrit :
The x86 Control-flow Enforcement Technology (CET) feature includes a new
type of memory called shadow stack. This shadow stack memory has some
unusual properties, which requires some core mm changes to
On 28.02.23 16:50, Palmer Dabbelt wrote:
On Fri, 13 Jan 2023 09:10:19 PST (-0800), da...@redhat.com wrote:
Let's support __HAVE_ARCH_PTE_SWP_EXCLUSIVE by stealing one bit
from the offset. This reduces the maximum swap space per file: on 32bit
to 16 GiB (was 32 GiB).
Seems fine to me, I doubt
On 27.02.23 20:46, Geert Uytterhoeven wrote:
Hi David,
On Mon, Feb 27, 2023 at 6:01 PM David Hildenbrand wrote:
/*
* Externally used page protection values.
diff --git a/arch/microblaze/include/asm/pgtable.h
b/arch/microblaze/include/asm/pgtable.h
index 42f5988e998b..7e3de54bf426 100644
--- a/arch/microblaze/include/asm/pgtable.h
+++ b/arch/microblaze/include/asm/pgtable.h
@@ -131,10 +131,10 @@
On 26.02.23 21:13, Geert Uytterhoeven wrote:
Hi David,
Hi Geert,
On Fri, Jan 13, 2023 at 6:16 PM David Hildenbrand wrote:
Let's support __HAVE_ARCH_PTE_SWP_EXCLUSIVE by stealing one bit
from the type. Generic MM currently only uses 5 bits for the type
(MAX_SWAPFILES_SHIFT), so the stolen
...@vger.kernel.org
Cc: linux...@vger.kernel.org
Cc: sparcli...@vger.kernel.org
Cc: linux...@lists.infradead.org
Cc: xen-de...@lists.xenproject.org
Cc: linux-a...@vger.kernel.org
Cc: linux...@kvack.org
Tested-by: Pengfei Xu
Suggested-by: David Hildenbrand
Signed-off-by: Rick Edgecombe
---
Hi Non-x86 Arch’s
On 07.02.23 01:32, Mark Brown wrote:
On Fri, Jan 13, 2023 at 06:10:04PM +0100, David Hildenbrand wrote:
Let's support __HAVE_ARCH_PTE_SWP_EXCLUSIVE by stealing one bit from the
offset. This reduces the maximum swap space per file to 64 GiB (was 128
GiB).
While at it, drop the PTE_TYPE_FAULT
Signed-off-by: Mike Rapoport (IBM)
Reviewed-by: David Hildenbrand
--
Thanks,
David / dhildenb
___
linux-snps-arc mailing list
linux-snps-arc@lists.infradead.org
http://lists.infradead.org/mailman/listinfo/linux-snps-arc
ementation of pfn_valid() and drop its per-architecture definitions.
Signed-off-by: Mike Rapoport (IBM)
Acked-by: Arnd Bergmann
Acked-by: Guo Ren # csky
Acked-by: Huacai Chen # LoongArch
Acked-by: Stafford Horne # OpenRISC
---
LGTM with the fixup
Reviewed-by: David Hildenbrand
h and
drop redundant definitions.
Signed-off-by: Mike Rapoport (IBM)
Reviewed-by: Geert Uytterhoeven
Acked-by: Geert Uytterhoeven
---
Reviewed-by: David Hildenbrand
--
Thanks,
David / dhildenb
Reviewed-by: David Hildenbrand
--
Thanks,
David / dhildenb
On 13.01.23 18:10, David Hildenbrand wrote:
We want to implement __HAVE_ARCH_PTE_SWP_EXCLUSIVE on all architectures.
Let's extend our sanity checks, especially testing that our PTE bit
does not affect:
* is_swap_pte() -> pte_present() and pte_none()
* the swap entry + type
* pte_swp_soft_di
(), pte_none() and HW happy. For now, let's keep it simple
because there might be something non-obvious.
Cc: Guo Ren
Signed-off-by: David Hildenbrand
---
arch/csky/abiv1/inc/abi/pgtable-bits.h | 13 +
arch/csky/abiv2/inc/abi/pgtable-bits.h | 19 ---
arch/csky/include/asm
ot
be used, and reusing it avoids having to steal one bit from the swap
offset.
While at it, mask the type in __swp_entry().
Cc: Michael Ellerman
Cc: Nicholas Piggin
Cc: Christophe Leroy
Signed-off-by: David Hildenbrand
---
arch/powerpc/include/asm/book3s/32/pgtable.h | 38 +-
at it, mask the type in __swp_entry(); use some helper definitions
to make the macros easier to grasp.
Cc: Thomas Gleixner
Cc: Ingo Molnar
Cc: Borislav Petkov
Cc: Dave Hansen
Cc: "H. Peter Anvin"
Signed-off-by: David Hildenbrand
---
arch/x86/include/asm/pgtable-2le
__HAVE_ARCH_PTE_SWP_EXCLUSIVE is now supported by all architectures that
support swp PTEs, so let's drop it.
Signed-off-by: David Hildenbrand
---
arch/alpha/include/asm/pgtable.h | 1 -
arch/arc/include/asm/pgtable-bits-arcv2.h| 1 -
arch/arm/include/asm/pgtable.h
and 1110 now identify swap PTEs.
While at it, remove SWP_TYPE_BITS (not really helpful as it's not used in
the actual swap macros) and mask the type in __swp_entry().
Cc: Chris Zankel
Cc: Max Filippov
Signed-off-by: David Hildenbrand
---
arch/xtensa/include/asm/pgtable.h | 32
the type in __swp_entry().
Cc: Richard Weinberger
Cc: Anton Ivanov
Cc: Johannes Berg
Signed-off-by: David Hildenbrand
---
arch/um/include/asm/pgtable.h | 37 +--
1 file changed, 35 insertions(+), 2 deletions(-)
diff --git a/arch/um/include/asm/pgtable.h b/arch/um
Let's support __HAVE_ARCH_PTE_SWP_EXCLUSIVE by stealing one bit
from the type. Generic MM currently only uses 5 bits for the type
(MAX_SWAPFILES_SHIFT), so the stolen bit was effectively unused.
While at it, mask the type in __swp_entry().
Cc: "David S. Miller"
Signed-off-by: David Hildenbrand
. Note that the old documentation was
wrong: we use 20 bits for the offset, and the reserved bits were 8 instead
of 7 in the ASCII art.
Cc: "David S. Miller"
Signed-off-by: David Hildenbrand
---
arch/sparc/include/asm/pgtable_32.h | 27 ++-
arch/sparc/i
at it, mask the type in __swp_entry().
Cc: Yoshinori Sato
Cc: Rich Felker
Signed-off-by: David Hildenbrand
---
arch/sh/include/asm/pgtable_32.h | 54 +---
1 file changed, 42 insertions(+), 12 deletions(-)
diff --git a/arch/sh/include/asm/pgtable_32.h b/arch/sh/include
in __swp_entry().
Cc: Paul Walmsley
Cc: Palmer Dabbelt
Cc: Albert Ou
Signed-off-by: David Hildenbrand
---
arch/riscv/include/asm/pgtable-bits.h | 3 +++
arch/riscv/include/asm/pgtable.h | 29 ++-
2 files changed, 27 insertions(+), 5 deletions(-)
diff --git a/arch/riscv
-by: David Hildenbrand
---
arch/powerpc/include/asm/nohash/32/pgtable.h | 22 +
arch/powerpc/include/asm/nohash/32/pte-40x.h | 6 ++---
arch/powerpc/include/asm/nohash/32/pte-44x.h | 18 --
arch/powerpc/include/asm/nohash/32/pte-85xx.h | 4 ++--
arch/powerpc/include/asm
having to steal one bit from the swap offset.
Cc: "James E.J. Bottomley"
Cc: Helge Deller
Signed-off-by: David Hildenbrand
---
arch/parisc/include/asm/pgtable.h | 41 ---
1 file changed, 38 insertions(+), 3 deletions(-)
diff --git a/arch/parisc/include/asm
-by: David Hildenbrand
---
arch/openrisc/include/asm/pgtable.h | 41 +
1 file changed, 36 insertions(+), 5 deletions(-)
diff --git a/arch/openrisc/include/asm/pgtable.h
b/arch/openrisc/include/asm/pgtable.h
index 6477c17b3062..903b32d662ab 100644
--- a/arch/openrisc
Let's support __HAVE_ARCH_PTE_SWP_EXCLUSIVE by using the yet-unused bit
31.
Cc: Thomas Bogendoerfer
Signed-off-by: David Hildenbrand
---
arch/nios2/include/asm/pgtable-bits.h | 3 +++
arch/nios2/include/asm/pgtable.h | 22 +-
2 files changed, 24 insertions(+), 1
for the swap type and document the layout.
Bits 26-31 should get ignored by hardware completely, so they can be
used.
Cc: Dinh Nguyen
Signed-off-by: David Hildenbrand
---
arch/nios2/include/asm/pgtable.h | 18 ++
1 file changed, 10 insertions(+), 8 deletions(-)
diff --git a/arch/nios2
, document it a bit better.
While at it, mask the type in __swp_entry()/mk_swap_pte().
Cc: Thomas Bogendoerfer
Signed-off-by: David Hildenbrand
---
arch/mips/include/asm/pgtable-32.h | 88 ++
arch/mips/include/asm/pgtable-64.h | 23 ++--
arch/mips/include/asm/pgtable.h
a little bit harder to decipher.
While at it, drop the comment from paulus---copy-and-paste leftover
from powerpc where we actually have _PAGE_HASHPTE---and mask the type in
__swp_entry_to_pte() as well.
Cc: Michal Simek
Signed-off-by: David Hildenbrand
---
arch/m68k/include/asm/mcf_pgtable.h | 4
the type in __swp_entry().
Cc: Geert Uytterhoeven
Cc: Greg Ungerer
Signed-off-by: David Hildenbrand
---
arch/m68k/include/asm/mcf_pgtable.h | 36 --
arch/m68k/include/asm/motorola_pgtable.h | 38 +--
arch/m68k/include/asm/sun3_pgtable.h | 39
The definitions are not required, let's remove them.
Cc: Geert Uytterhoeven
Cc: Greg Ungerer
Signed-off-by: David Hildenbrand
---
arch/m68k/include/asm/pgtable_no.h | 6 --
1 file changed, 6 deletions(-)
diff --git a/arch/m68k/include/asm/pgtable_no.h
b/arch/m68k/include/asm
and could also be used
in swap PMD context later.
Cc: Huacai Chen
Cc: WANG Xuerui
Signed-off-by: David Hildenbrand
---
arch/loongarch/include/asm/pgtable-bits.h | 4 +++
arch/loongarch/include/asm/pgtable.h | 39 ---
2 files changed, 39 insertions(+), 4 deletions
Let's support __HAVE_ARCH_PTE_SWP_EXCLUSIVE by stealing one bit
from the type. Generic MM currently only uses 5 bits for the type
(MAX_SWAPFILES_SHIFT), so the stolen bit is effectively unused.
While at it, also mask the type in __swp_entry().
Signed-off-by: David Hildenbrand
---
arch/ia64
Let's support __HAVE_ARCH_PTE_SWP_EXCLUSIVE by stealing one bit from the
offset. This reduces the maximum swap space per file to 16 GiB (was 32
GiB).
While at it, mask the type in __swp_entry().
Cc: Brian Cain
Signed-off-by: David Hildenbrand
---
arch/hexagon/include/asm/pgtable.h | 37
with "Linux PTEs" not "hardware PTEs". Also, properly mask the type in
__swp_entry().
Cc: Russell King
Signed-off-by: David Hildenbrand
---
arch/arm/include/asm/pgtable-2level.h | 3 +++
arch/arm/include/asm/pgtable-3level.h | 3 +++
arch/arm/include/asm/p
Let's support __HAVE_ARCH_PTE_SWP_EXCLUSIVE by using bit 5, which is yet
unused. The only important part seems to be to not use _PAGE_PRESENT
(bit 9).
Cc: Vineet Gupta
Signed-off-by: David Hildenbrand
---
arch/arc/include/asm/pgtable-bits-arcv2.h | 27 ---
1 file changed
: Matt Turner
Signed-off-by: David Hildenbrand
---
arch/alpha/include/asm/pgtable.h | 41
1 file changed, 37 insertions(+), 4 deletions(-)
diff --git a/arch/alpha/include/asm/pgtable.h b/arch/alpha/include/asm/pgtable.h
index 9e45f6735d5d..970abf511b13 100644
hen the swap PTE layout differs
heavily from ordinary PTEs. Let's properly construct a swap PTE from
swap type+offset.
Signed-off-by: David Hildenbrand
---
mm/debug_vm_pgtable.c | 23 ++-
1 file changed, 22 insertions(+), 1 deletion(-)
diff --git a/mm/debug_vm_pgtable.c b
owerpc/mm: support __HAVE_ARCH_PTE_SWP_EXCLUSIVE on 32bit book3s"
-> Fixup swap PTE description
David Hildenbrand (26):
mm/debug_vm_pgtable: more pte_swp_exclusive() sanity checks
alpha/mm: support __HAVE_ARCH_PTE_SWP_EXCLUSIVE
arc/mm: support __HAVE_ARCH_PTE_SWP_E
n't call it pte_mksoft_clean().
Grepping for "pte_swp.*soft_dirty" gives you the full picture.
Thanks!
David
Huacai
On Tue, Dec 6, 2022 at 10:48 PM David Hildenbrand wrote:
This is the follow-up on [1]:
[PATCH v2 0/8] mm: COW fixes part 3: reliable GUP R/W FOLL_GET
On 06.12.22 15:47, David Hildenbrand wrote:
This is the follow-up on [1]:
[PATCH v2 0/8] mm: COW fixes part 3: reliable GUP R/W FOLL_GET of
anonymous pages
After we implemented __HAVE_ARCH_PTE_SWP_EXCLUSIVE on most prominent
enterprise architectures, implement
On 08.12.22 09:52, David Hildenbrand wrote:
On 07.12.22 14:55, Christophe Leroy wrote:
Le 06/12/2022 à 15:47, David Hildenbrand a écrit :
We already implemented support for 64bit book3s in commit bff9beaa2e80
("powerpc/pgtable: support __HAVE_ARCH_PTE_SWP_EXCLUSIVE for book3s")
On 07.12.22 14:55, Christophe Leroy wrote:
Le 06/12/2022 à 15:47, David Hildenbrand a écrit :
We already implemented support for 64bit book3s in commit bff9beaa2e80
("powerpc/pgtable: support __HAVE_ARCH_PTE_SWP_EXCLUSIVE for book3s")
Let's support __HAVE_ARCH_PTE_SWP_EXCLUSIVE als
.com/aarcange/kernel-testcases-for-v5.11/-/blob/main/page_count_do_wp_page-swap.c
[3]
https://gitlab.com/davidhildenbrand/scratchspace/-/blob/main/test_swp_exclusive.c
Let's communicate driver-managed regions to memblock, to properly
teach kexec_file with CONFIG_ARCH_KEEP_MEMBLOCK to not place images on
these memory regions.
Signed-off-by: David Hildenbrand
---
mm/memory_hotplug.c | 5 -
1 file changed, 4 insertions(+), 1 deletion(-)
diff --git a/mm
ver; memory might not actually be
physically hotunpluggable. kexec *must not* indicate this memory to
the second kernel and *must not* place kexec-images on this memory.
Signed-off-by: David Hildenbrand
---
include/linux/memblock.h | 16 ++--
kernel/kexec_file.c
memblocks with wrong flags, which will be
important in a follow-up patch that introduces a new flag to properly
handle add_memory_driver_managed().
Acked-by: Geert Uytterhoeven
Acked-by: Heiko Carstens
Signed-off-by: David Hildenbrand
---
arch/arc/mm/init.c | 4 ++--
arch/ia64/mm
to
be re-armed to update the memory map for the second kernel and to place the
kexec-images somewhere else.
Signed-off-by: David Hildenbrand
---
include/linux/memblock.h | 6 +-
1 file changed, 5 insertions(+), 1 deletion(-)
diff --git a/include/linux/memblock.h b/include/linux/memblock.h
the error.
Signed-off-by: David Hildenbrand
---
mm/memory_hotplug.c | 8 ++--
1 file changed, 6 insertions(+), 2 deletions(-)
diff --git a/mm/memory_hotplug.c b/mm/memory_hotplug.c
index 9fd0be32a281..917b3528636d 100644
--- a/mm/memory_hotplug.c
+++ b/mm/memory_hotplug.c
@@ -1384,8 +1384,11
ux-s...@vger.kernel.org
Cc: linux...@kvack.org
Cc: ke...@lists.infradead.org
David Hildenbrand (5):
mm/memory_hotplug: handle memblock_add_node() failures in
add_memory_resource()
memblock: improve MEMBLOCK_HOTPLUG documentation
memblock: allow to specify flags with memblock_add_node()
memb
On 30.09.21 23:21, Mike Rapoport wrote:
On Wed, Sep 29, 2021 at 06:54:01PM +0200, David Hildenbrand wrote:
On 29.09.21 18:39, Mike Rapoport wrote:
Hi,
On Mon, Sep 27, 2021 at 05:05:17PM +0200, David Hildenbrand wrote:
Let's add a flag that corresponds to IORESOURCE_SYSRAM_DRIVER_MANAGED
On 29.09.21 18:39, Mike Rapoport wrote:
Hi,
On Mon, Sep 27, 2021 at 05:05:17PM +0200, David Hildenbrand wrote:
Let's add a flag that corresponds to IORESOURCE_SYSRAM_DRIVER_MANAGED.
Similar to MEMBLOCK_HOTPLUG, most infrastructure has to treat such memory
like ordinary MEMBLOCK_NONE memory
On 29.09.21 18:25, Mike Rapoport wrote:
On Mon, Sep 27, 2021 at 05:05:16PM +0200, David Hildenbrand wrote:
We want to specify flags when hotplugging memory. Let's prepare to pass
flags to memblock_add_node() by adjusting all existing users.
Note that when hotplugging memory the system
Intended subject was "[PATCH v1 0/4] mm/memory_hotplug: full support for
add_memory_driver_managed() with CONFIG_ARCH_KEEP_MEMBLOCK"
--
Thanks,
David / dhildenb
. This prepares architectures
that need CONFIG_ARCH_KEEP_MEMBLOCK, such as arm64, for virtio-mem
support.
Signed-off-by: David Hildenbrand
---
include/linux/memblock.h | 16 ++--
kernel/kexec_file.c | 5 +
mm/memblock.c| 4
3 files changed, 23 insertions(+), 2
memblock call.
Signed-off-by: David Hildenbrand
---
arch/arc/mm/init.c | 4 ++--
arch/ia64/mm/contig.c| 2 +-
arch/ia64/mm/init.c | 2 +-
arch/m68k/mm/mcfmmu.c| 3 ++-
arch/m68k/mm/motorola.c | 6 --
arch/mips/loongson64/init.c
radead.org
David Hildenbrand (4):
mm/memory_hotplug: handle memblock_add_node() failures in
add_memory_resource()
memblock: allow to specify flags with memblock_add_node()
memblock: add MEMBLOCK_DRIVER_MANAGED to mimic
IORESOURCE_SYSRAM_DRIVER_MANAGED
mm/memory_hotplug
e *page)
{
Acked-by: David Hildenbrand
--
Thanks,
David / dhildenb
int order)
@@ -7276,7 +7276,7 @@ static void __ref alloc_node_mem_map(struct pglist_data
*pgdat)
pr_debug("%s: node %d, pgdat %08lx, node_mem_map %08lx\n",
__func__, pgdat->node_id, (unsigned long)pgdat,
(unsigned l
N can be used to index
-appropriate `node_mem_map` array to access the `struct page` and
-the offset of the `struct page` from the `node_mem_map` plus
-`node_start_pfn` is the PFN of that page.
-
SPARSEMEM
=
Reviewed-by: David Hildenbrand
--
Thanks,
David / dhildenb
) gets
- * optimized to _page_data at compile-time.
+ * For the case of non-NUMA systems the NODE_DATA() gets optimized to
+ * _page_data at compile-time.
*/
static inline struct zonelist *node_zonelist(int nid, gfp_t flags)
{
Reviewed-by: David Hildenbrand
--
Thanks,
David / dhildenb
high-order allocations like THP are likely to be
- * unsupported and the premature reclaim offsets the advantage of long-term
- * fragmentation avoidance.
- */
-int watermark_boost_factor __read_mostly;
-#else
int watermark_boost_factor __read_mostly = 15000;
-#endif
int watermark_scale_facto
On 02.06.21 12:53, Mike Rapoport wrote:
From: Mike Rapoport
DISCONTIGMEM was replaced by FLATMEM with freeing of the unused memory map
in v5.11.
Remove the support for DISCONTIGMEM entirely.
Signed-off-by: Mike Rapoport
Acked-by: David Hildenbrand
--
Thanks,
David / dhildenb
node_set_online(1);
Reviewed-by: David Hildenbrand
--
Thanks,
David / dhildenb
INITRD_SIZE);
- }
- }
-#endif /* CONFIG_BLK_DEV_INITRD */
-}
-
-void __init paging_init(void)
-{
- unsigned long max_zone_pfn[MAX_NR_ZONES] = {0, };
- unsigned long dma_local_pfn;
-
- /*
-
,
- totalcma_pages << (PAGE_SHIFT - 10),
+ totalcma_pages << (PAGE_SHIFT - 10)
#ifdef CONFIG_HIGHMEM
- totalhigh_pages() << (PAGE_SHIFT - 10),
+ , totalhigh_pages() << (PAGE_SHIFT - 10)
#endif
- str ? ", &
ne))
- continue;
-
Right, pfn_to_online_page() -> pfn_valid() / pfn_valid_within() should
handle that.
Acked-by: David Hildenbrand
--
Thanks,
David / dhildenb
On 12.04.20 21:48, Mike Rapoport wrote:
> From: Baoquan He
>
> When called during boot the memmap_init_zone() function checks if each PFN
> is valid and actually belongs to the node being initialized using
> early_pfn_valid() and early_pfn_in_nid().
>
> Each such check may cost up to O(log(n))
On 15.10.19 13:50, David Hildenbrand wrote:
On 15.10.19 13:47, Michal Hocko wrote:
On Tue 15-10-19 13:42:03, David Hildenbrand wrote:
[...]
-static bool pfn_range_valid_gigantic(struct zone *z,
- unsigned long start_pfn, unsigned long nr_pages)
-{
- unsigned long i
On 15.10.19 13:47, Michal Hocko wrote:
On Tue 15-10-19 13:42:03, David Hildenbrand wrote:
[...]
-static bool pfn_range_valid_gigantic(struct zone *z,
- unsigned long start_pfn, unsigned long nr_pages)
-{
- unsigned long i, end_pfn = start_pfn + nr_pages
On 15.10.19 11:21, Anshuman Khandual wrote:
alloc_gigantic_page() implements an allocation method where it scans over
various zones looking for a large contiguous memory block which could not
have been allocated through the buddy allocator. A subsequent patch which
tests arch page table helpers