> > Thank you for noticing this. Not sure how this mismerge happened. I
> > have added the missing case, and VHE is now initialized correctly during
> > boot:
> > [ 14.698175] kvm [1]: VHE mode initialized successfully
> >
> > Verified during normal boot, kexec reboot, and kdump reboot. I will respin the
>
On Thu, Apr 8, 2021 at 6:24 AM Marc Zyngier wrote:
>
> On 2021-04-08 05:05, Pavel Tatashin wrote:
> > From: James Morse
> >
> > The hyp-stub's el1_sync code doesn't do very much, so it can easily fit
> > in the vectors.
> >
> > With this, all of the hyp-s
on the mapping
it is executing from.
This makes no difference yet as the relocation code runs with the MMU
disabled.
Co-developed-by: James Morse
Signed-off-by: Pavel Tatashin
---
arch/arm64/include/asm/assembler.h | 19 +++
arch/arm64/include/asm/kexec.h | 2 ++
arch/arm64
message]
Signed-off-by: Pavel Tatashin
---
arch/arm64/include/asm/trans_pgd.h | 5 +--
arch/arm64/mm/trans_pgd.c | 57 --
2 files changed, 1 insertion(+), 61 deletions(-)
diff --git a/arch/arm64/include/asm/trans_pgd.h
b/arch/arm64/include/asm/trans_pgd.h
index
This header contains only cpu_soft_restart() which is never used directly
anymore. So, remove this header, and rename the helper to
cpu_soft_restart().
Suggested-by: James Morse
Signed-off-by: Pavel Tatashin
---
arch/arm64/include/asm/kexec.h | 6 ++
arch/arm64/kernel/cpu-reset.S
Currently, the relocation code declares start and end variables
which are used to compute its size.
A better way to do this is to use the ld script instead, and put the
relocation function in its own section.
Signed-off-by: Pavel Tatashin
---
arch/arm64/include/asm/sections.h | 1 +
arch/arm64/kernel
Now that kexec does its relocations with the MMU enabled, we no longer
need to clean the relocation data to the PoC.
Co-developed-by: James Morse
Signed-off-by: Pavel Tatashin
---
arch/arm64/kernel/machine_kexec.c | 40 ---
1 file changed, 40 deletions(-)
diff
The time is proportional to the size of relocation, therefore if the
initramfs is large (100M), it could take over a second.
Signed-off-by: Pavel Tatashin
---
arch/arm64/include/asm/kexec.h | 3 +++
arch/arm64/kernel/asm-offsets.c | 1 +
arch/arm64/kernel/machine_kexec.c | 16 ++
arch/arm64/kernel/relocate_kernel.S | 33 +++--
4 files changed, 38 insertions
Since we are going to keep the MMU enabled during relocation, we need to
keep EL1 mode throughout the relocation.
Stay in EL1, and switch to EL2 only right before entering the new world.
Suggested-by: James Morse
Signed-off-by: Pavel Tatashin
---
arch/arm64/kernel/cpu-reset.h | 3 +--
arch
If we have EL2 mode without VHE, the EL2 vectors are needed in order
to switch to EL2 and jump to the new world with hypervisor privileges.
In preparation for MMU-enabled relocation, configure our EL2 table now.
Suggested-by: James Morse
Signed-off-by: Pavel Tatashin
---
arch/arm64/Kconfig
more arguments: once we enable MMU we
will need to pass information about page tables.
Pass kimage to arm64_relocate_new_kernel, and teach it to get the
required fields from kimage.
Suggested-by: James Morse
Signed-off-by: Pavel Tatashin
---
arch/arm64/kernel/asm-offsets.c | 7 +++
arch
on
the eye.
Code generated by the existing callers is unchanged.
Signed-off-by: James Morse
[Fixed merging issues]
Signed-off-by: Pavel Tatashin
---
arch/arm64/include/asm/assembler.h | 12
arch/arm64/kernel/relocate_kernel.S | 13 +++--
2 files changed, 11 insertions
Currently, during kexec load we copy the relocation function and
flush it. However, we can also flush the kexec relocation buffers, and,
if the new kernel image is already in place (i.e. crash kernel), we can
also flush the new kernel image itself.
Signed-off-by: Pavel Tatashin
---
arch/arm64
In case of kdump, or when segments are already in place, the relocation
is not needed, therefore the setup of the relocation function and the
call to it can be skipped.
Signed-off-by: Pavel Tatashin
Suggested-by: James Morse
---
arch/arm64/kernel/machine_kexec.c | 34
Currently, only hibernate sets custom ttbr0 with a safe idmapped function.
Kexec is also going to be using this functionality when the relocation
code is idmapped.
Move the setup sequence to a dedicated cpu_install_ttbr0() for custom
ttbr0.
Suggested-by: James Morse
Signed-off-by: Pavel
Users of trans_pgd may also need a copy of the vector table, because it
may also be overwritten when the linear map is overwritten.
Move setup of EL2 vectors from hibernate to trans_pgd, so it can be
later shared with kexec as well.
Suggested-by: James Morse
Signed-off-by: Pavel Tatashin
Replace places that contain logic like this:
is_hyp_mode_available() && !is_kernel_in_hyp_mode()
with a dedicated boolean function is_hyp_callable(). This will be needed
later in kexec in order to switch back to EL2 sooner.
Suggested-by: James Morse
Signed-off-by: Pavel
-off-by: James Morse
[Fixed merging issues]
Signed-off-by: Pavel Tatashin
---
arch/arm64/kernel/hyp-stub.S | 59 ++--
1 file changed, 29 insertions(+), 30 deletions(-)
diff --git a/arch/arm64/kernel/hyp-stub.S b/arch/arm64/kernel/hyp-stub.S
index ff329c5c074d
within
its vectors.
Signed-off-by: James Morse
[Fixed merging issues]
Signed-off-by: Pavel Tatashin
---
arch/arm64/kernel/hyp-stub.S | 56 +++-
1 file changed, 23 insertions(+), 33 deletions(-)
diff --git a/arch/arm64/kernel/hyp-stub.S b/arch/arm64/kernel/hyp
a build check that we didn't overflow 2K.
Signed-off-by: James Morse
Signed-off-by: Pavel Tatashin
---
arch/arm64/kernel/hyp-stub.S | 6 +-
1 file changed, 5 insertions(+), 1 deletion(-)
diff --git a/arch/arm64/kernel/hyp-stub.S b/arch/arm64/kernel/hyp-stub.S
index 5eccbd62fec8
James Morse (4):
arm64: hyp-stub: Check the size of the HYP stub's vectors
arm64: hyp-stub: Move invalid vector entries into the vectors
arm64: hyp-stub: Move el1_sync into the vectors
arm64: kexec: Use dcache ops macros instead of open-coding
Pavel Tatashin (13):
arm64: kernel: add helper
> > Andrew, since "mm cma: rename PF_MEMALLOC_NOCMA to PF_MEMALLOC_PIN" is
> > not yet in the mainline, should I send a new version of this patch so
> > we won't have bisecting problems in the future?
>
> I've already added Mike's fix, as
>
On Wed, Mar 31, 2021 at 12:38 PM Mike Rapoport wrote:
>
> From: Mike Rapoport
>
> The renaming of PF_MEMALLOC_NOCMA to PF_MEMALLOC_PIN missed one occurrence
> in mm/hugetlb.c which causes build error:
>
> CC mm/hugetlb.o
> mm/hugetlb.c: In function ‘dequeue_huge_page_node_exact’:
>
ating linear
mapping")
Signed-off-by: Pavel Tatashin
Tested-by: Tyler Hicks
Reviewed-by: Anshuman Khandual
---
arch/arm64/mm/mmu.c | 20 ++--
1 file changed, 18 insertions(+), 2 deletions(-)
diff --git a/arch/arm64/mm/mmu.c b/arch/arm64/mm/mmu.c
index 6f0648777d34..ee
On Mon, Mar 29, 2021 at 9:51 AM Greg Kroah-Hartman
wrote:
>
> On Mon, Mar 29, 2021 at 03:49:19PM +0200, Ard Biesheuvel wrote:
> > (+ Pavel)
> >
> > On Mon, 29 Mar 2021 at 15:42, Greg Kroah-Hartman
> > wrote:
> > >
> > > On Mon, Mar 29, 2021 at 03:08:52PM +0200, Ard Biesheuvel wrote:
> > > > On
Signed-off-by: Tyler Hicks
This solves the problem that I had in this thread:
https://lore.kernel.org/lkml/ca+ck2bcd13jblmxn2mauryvqgkbs5ic2uqyssxxtccszxcm...@mail.gmail.com/
Thank you Tyler for root causing and finding a proper fix.
Reviewed-by: Pavel Tatashin
> ---
> drivers/nvdimm/reg
tex before calling
> __loop_clr_fd(), so refcnt and LO_FLAGS_AUTOCLEAR check in lo_release
> stay in sync.
There is a race with autoclear logic where use after free may occur as
shown in the above scenario. Do not drop lo->lo_mutex before calling
__loop_clr_fd(), so refcnt and LO_FLAGS_AUTOCLEAR
* In autoclear mode, stop the loop thread
> * and remove configuration after last close.
> */
> __loop_clr_fd(lo, true);
> - return;
> + goto out_unlock;
> } else if (lo->lo_state == Lo_bound) {
> /*
> * Otherwise keep thread (if running) and config,
> --
> 2.17.1
>
LGTM
Reviewed-by: Pavel Tatashin
Thank you,
Pasha
it in ARM64 version of elfcorehdr_read() as well.
Signed-off-by: Pavel Tatashin
---
arch/arm64/kernel/crash_dump.c | 2 ++
1 file changed, 2 insertions(+)
diff --git a/arch/arm64/kernel/crash_dump.c b/arch/arm64/kernel/crash_dump.c
index e6e284265f19..58303a9ec32c 100644
--- a/arch/arm64/kernel
<6>[ 70.918725] psci: CPU5 killed (polled 0 ms)
<5>[ 70.919704] CPU6: shutdown
<6>[ 70.920726] psci: CPU6 killed (polled 4 ms)
<5>[ 70.921642] CPU7: shutdown
<6>[ 70.922650] psci: CPU7 killed (polled 0 ms)
Signed-off-by: Pavel Tatashin
Reviewed-by: Kees Cook
pstore.
Previous submissions
v1 https://lore.kernel.org/lkml/20200605194642.62278-1-pasha.tatas...@soleen.com
v2
https://lore.kernel.org/lkml/20210126204125.313820-1-pasha.tatas...@soleen.com
Pavel Tatashin (1):
kexec: dump kmessage before machine_kexec
kernel/kexec_core.c | 2 ++
1 file changed
Hi Will,
Could you please take this patch now that the dependencies landed in
the mainline?
Thank you,
Pasha
On Mon, Feb 22, 2021 at 9:17 AM Pavel Tatashin
wrote:
>
> > Taking that won't help either though, because it will just explode when
> > it meets 'mm' in Linus's tree.
>
> Taking that won't help either though, because it will just explode when
> it meets 'mm' in Linus's tree.
>
> So here's what I think we need to do:
>
> - I'll apply your v3 at -rc1
> - You can send backports based on your -v2 for stable once the v3 has
> been merged upstream.
>
> Sound
On Fri, Feb 19, 2021 at 2:14 PM Will Deacon wrote:
>
> On Fri, Feb 19, 2021 at 02:06:31PM -0500, Pavel Tatashin wrote:
> > On Fri, Feb 19, 2021 at 12:53 PM Will Deacon wrote:
> > >
> > > On Mon, Feb 15, 2021 at 01:59:08PM -0500, Pavel Tatashin wrote:
> > >
]
int machine_kexec_post_load(struct kimage *kimage)
Reported-by: kernel test robot
Link:
https://lore.kernel.org/linux-arm-kernel/202102030727.gqtokach-...@intel.com
Signed-off-by: Pavel Tatashin
---
include/linux/kexec.h | 2 ++
kernel/kexec_internal.h | 2 --
2 files changed, 2 insertions
On Fri, Feb 19, 2021 at 2:18 PM Will Deacon wrote:
>
> On Tue, Feb 16, 2021 at 10:03:51AM -0500, Pavel Tatashin wrote:
> > Memory hotplug may fail on systems with CONFIG_RANDOMIZE_BASE because the
> > linear map range is not checked correctly.
> >
> > The start ph
On Fri, Feb 19, 2021 at 12:53 PM Will Deacon wrote:
>
> On Mon, Feb 15, 2021 at 01:59:08PM -0500, Pavel Tatashin wrote:
> > machine_kexec_post_load() is called after kexec load is finished. It must
> > be declared in public header not in kexec_internal.h
>
> Could you pr
_pa);
Fixes a hotplug error that may occur on systems with CONFIG_RANDOMIZE_BASE
enabled.
Applies against linux-next.
v1:
https://lore.kernel.org/lkml/20210213012316.1525419-1-pasha.tatas...@soleen.com
v2:
https://lore.kernel.org/lkml/20210215192237.362706-1-pasha.tatas...@soleen.com
Pavel Tatas
on QEMU with setting kaslr-seed to ~0ul:
memstart_offset_seed = 0x
START: __pa(_PAGE_OFFSET(vabits_actual)) = 9000c000
END: __pa(PAGE_END - 1) = 1000bfff
Signed-off-by: Pavel Tatashin
Fixes: 58284a901b42 ("arm64/mm: Validate hotplug range before creating linear
mapping")
> There is a new generic framework which expects the platform to provide two
> distinct range points (low and high) for hotplug address comparison. Those
> range points can be different depending on whether address randomization
> is enabled and the flip occurs. But this comparison here in the
On Tue, Feb 16, 2021 at 2:36 AM Ard Biesheuvel wrote:
>
> On Tue, 16 Feb 2021 at 04:12, Anshuman Khandual
> wrote:
> >
> >
> >
> > On 2/16/21 1:21 AM, Pavel Tatashin wrote:
> > > On Mon, Feb 15, 2021 at 2:34 PM Ard Biesheuvel wrote:
> > >&g
> >
> > Btw, the KASLR check is incorrect: memstart_addr could also be
> > negative when running the 52-bit VA kernel on hardware that is only
> > 48-bit VA capable.
>
> Good point!
>
> if (IS_ENABLED(CONFIG_ARM64_VA_BITS_52) && (vabits_actual != 52))
> memstart_addr -= _PAGE_OFFSET(48) -
On Mon, Feb 15, 2021 at 2:34 PM Ard Biesheuvel wrote:
>
> On Mon, 15 Feb 2021 at 20:30, Pavel Tatashin
> wrote:
> >
> > > Can't we simply use signed arithmetic here? This expression works fine
> > > if the quantities are all interpreted as s64 instead of
> Can't we simply use signed arithmetic here? This expression works fine
> if the quantities are all interpreted as s64 instead of u64
I was thinking about that, but I do not like the idea of using signed
arithmetic for physical addresses. Also, I am worried that someone in
the future will
led.
v1:
https://lore.kernel.org/lkml/20210213012316.1525419-1-pasha.tatas...@soleen.com
Pavel Tatashin (1):
arm64: mm: correct the inside linear map boundaries during hotplug
check
arch/arm64/mm/mmu.c | 20 ++--
1 file changed, 18 insertions(+), 2 deletions(-)
--
2.25.1
This is against for-next/kexec, a fix for the machine_kexec_post_load()
warning.
Reported by kernel test robot [1].
[1] https://lore.kernel.org/linux-arm-kernel/202102030727.gqtokach-...@intel.com
Pavel Tatashin (1):
kexec: move machine_kexec_post_load() to public interface
include/linux/kexec.h
machine_kexec_post_load() is called after kexec load is finished. It must
be declared in a public header, not in kexec_internal.h.
Reported-by: kernel test robot
Signed-off-by: Pavel Tatashin
---
include/linux/kexec.h | 2 ++
kernel/kexec_internal.h | 2 --
2 files changed, 2 insertions(+), 2
Document the special handling of page pinning when ZONE_MOVABLE is present.
Signed-off-by: Pavel Tatashin
Suggested-by: David Hildenbrand
Acked-by: Michal Hocko
---
Documentation/admin-guide/mm/memory-hotplug.rst | 9 +
1 file changed, 9 insertions(+)
diff --git a/Documentation/admin
is declared in memory.c which is compiled
with CONFIG_MMU.
Signed-off-by: Pavel Tatashin
---
include/linux/mm.h | 3 ++-
include/linux/mmzone.h | 4
include/linux/pgtable.h | 12
3 files changed, 18 insertions(+), 1 deletion(-)
diff --git a/include/linux/mm.h b/include
We should not pin pages in ZONE_MOVABLE. Currently, the only pages we
refuse to pin are movable CMA pages. Generalize the function that migrates
CMA pages to migrate all movable pages. Use is_pinnable_page() to check
which pages need to be migrated.
Signed-off-by: Pavel Tatashin
Reviewed-by: John Hubbard
In __get_user_pages_locked(), i counts the number of pages; it should be
long, as long is used in all other places to contain a number of pages,
and a 32-bit type becomes increasingly small for handling page-count
proportional values.
Signed-off-by: Pavel Tatashin
Acked-by: Michal Hocko
---
mm/gup.c | 2
When pages are longterm pinned, we must migrate them out of the movable
zone. The function that migrates them has a hidden loop with goto. The
loop is used to retry on isolation failures, and after successful
migration. Make this code better by moving this loop to the caller.
Signed-off-by: Pavel
difference between pin vs no-pin case.
Also change the type of nr from int to long, as it counts the number of pages.
Signed-off-by: Pavel Tatashin
Reviewed-by: John Hubbard
---
mm/gup_test.c | 23 ++-
mm/gup_test.h | 3 ++-
too
ks between doing gup/pup on pages that have been
pre-faulted in from user space, vs. doing gup/pup on pages that are not
faulted in until gup/pup time (via FOLL_TOUCH). This decision is
controlled with the new -z command line option.
Signed-off-by: Pavel Tatashin
Reviewed-by: John Hubbard
---
mm/
When we add the zone constraint logic to current_gfp_context() we
will be able to remove current->flags load from current_alloc_flags, and
therefore return fast-path to the current performance level.
Suggested-by: Michal Hocko
Signed-off-by: Pavel Tatashin
Acked-by: Michal Hocko
---
mm/page_all
is not in ZONE_MOVABLE and not of MIGRATE_CMA type.
Signed-off-by: Pavel Tatashin
Acked-by: Michal Hocko
---
include/linux/mm.h | 18 ++
include/linux/sched/mm.h | 6 +-
mm/hugetlb.c | 2 +-
mm/page_alloc.c | 20 +---
4 files changed, 33
context which can only get pages suitable for long-term pins.
Also re-name:
memalloc_nocma_save()/memalloc_nocma_restore()
to
memalloc_pin_save()/memalloc_pin_restore()
and make the new functions common.
Signed-off-by: Pavel Tatashin
Reviewed-by: John Hubbard
Acked-by: Michal Hocko
---
include
are transient, we retry indefinitely.
Fixes: 9a4e9f3b2d73 ("mm: update get_user_pages_longterm to migrate pages
allocated from CMA region")
Signed-off-by: Pavel Tatashin
Reviewed-by: Jason Gunthorpe
---
mm/gup.c | 60
1 file c
-by: Pavel Tatashin
Reviewed-by: Jason Gunthorpe
---
mm/gup.c | 17 +++--
1 file changed, 7 insertions(+), 10 deletions(-)
diff --git a/mm/gup.c b/mm/gup.c
index 11ca49f3f11d..2d0292980b1d 100644
--- a/mm/gup.c
+++ b/mm/gup.c
@@ -1550,7 +1550,6 @@ static long
the addresses.
The resulting pages[i] might end up having pages which are not
compound-size page aligned.
Fixes: aa712399c1e8 ("mm/gup: speed up check_and_migrate_cma_pages() on huge
page")
Reported-by: Jason Gunthorpe
Signed-off-by: Pavel Tatashin
Reviewed-by: Jason Gunthorpe
---
mm/
.kernel.org/lkml/20210201153827.444374-1-pasha.tatas...@soleen.com
v10
https://lore.kernel.org/lkml/20210211162427.618913-1-pasha.tatas...@soleen.com
Pavel Tatashin (14):
mm/gup: don't pin migrated cma pages in movable zone
mm/gup: check every subpage of a compound page during isolation
mm/gup: return an er
In order not to fragment CMA the pinned pages are migrated. However,
they are migrated to ZONE_MOVABLE, which also should not have pinned pages.
Remove __GFP_MOVABLE, so pages can be migrated to zones where pinning
is allowed.
Signed-off-by: Pavel Tatashin
Reviewed-by: David Hildenbrand
On Mon, Feb 15, 2021 at 12:26 AM Anshuman Khandual
wrote:
>
> Hello Pavel,
>
> On 2/13/21 6:53 AM, Pavel Tatashin wrote:
> > Memory hotplug may fail on systems with CONFIG_RANDOMIZE_BASE because the
> > linear map range is not checked correctly.
> >
> > The st
> We're ignoring the portion from the linear mapping's start PA to the
> point of wraparound. Could the start and end of the hot plugged memory
> fall within this range and, as a result, the hot plug operation be
> incorrectly blocked?
Hi Tyler,
Thank you for looking at this fix. The maximum
.kernel.org/lkml/20210122033748.924330-1-pasha.tatas...@soleen.com
v8
https://lore.kernel.org/lkml/20210125194751.1275316-1-pasha.tatas...@soleen.com
v9
https://lore.kernel.org/lkml/20210201153827.444374-1-pasha.tatas...@soleen.com
Pavel Tatashin (14):
mm/gup: don't pin migrated cma pages in movab
I would like to start a discussion about how we can improve the Linux
crash dump facility, and use warm reboot / firmware assistance in
order to more reliably collect crash dumps while using fewer
memory resources and being more performant.
Currently, the main way to collect crash dumps on Linux is