Re: [PATCH v10 16/18] arm64: kexec: configure trans_pgd page table for kexec

2021-01-25 Thread Pavel Tatashin
I forgot to make changes to arch/arm64/Kconfig. The correct patch is
below.

---

From a2bc374320d7c7efd3c40644ad3d6d59a024b301 Mon Sep 17 00:00:00 2001
From: Pavel Tatashin 
Date: Mon, 29 Jul 2019 21:24:25 -0400
Subject: [PATCH v10 16/18] arm64: kexec: configure trans_pgd page table for
 kexec

Configure a page table located in kexec-safe memory that has
the following mappings (a sketch of their use follows the list):

1. identity mapping for the text of the relocation function, with
   executable permission;
2. va mappings for all source ranges;
3. va mappings for all destination ranges.
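
With these mappings in place, the relocation step itself collapses into
a single linear copy. A minimal sketch in C, using the fields this patch
adds to kern_reloc_arg (illustrative only, not the patch code):

        /* src_addr, dst_addr and copy_len are filled in by map_segments() */
        memcpy((void *)arg->dst_addr, (void *)arg->src_addr, arg->copy_len);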

Signed-off-by: Pavel Tatashin 
---
 arch/arm64/Kconfig                |  2 +-
 arch/arm64/include/asm/kexec.h    | 12 
 arch/arm64/kernel/asm-offsets.c   |  6 ++
 arch/arm64/kernel/machine_kexec.c | 91 ++-
 4 files changed, 109 insertions(+), 2 deletions(-)

diff --git a/arch/arm64/Kconfig b/arch/arm64/Kconfig
index fc0ed9d6e011..440abd0c0ee1 100644
--- a/arch/arm64/Kconfig
+++ b/arch/arm64/Kconfig
@@ -1134,7 +1134,7 @@ config CRASH_DUMP
 
 config TRANS_TABLE
def_bool y
-   depends on HIBERNATION
+   depends on HIBERNATION || KEXEC_CORE
 
 config XEN_DOM0
def_bool y
diff --git a/arch/arm64/include/asm/kexec.h b/arch/arm64/include/asm/kexec.h
index b96d8a6aac80..049cde429b1b 100644
--- a/arch/arm64/include/asm/kexec.h
+++ b/arch/arm64/include/asm/kexec.h
@@ -105,6 +105,12 @@ extern const char arm64_kexec_el2_vectors[];
  * el2_vector  If present means that relocation routine will go to EL1
  * from EL2 to do the copy, and then back to EL2 to do the jump
  * to new world.
+ * trans_ttbr0 idmap for relocation function and its argument
+ * trans_ttbr1 map for source/destination addresses.
+ * trans_t0sz  t0sz for idmap page in trans_ttbr0
+ * src_addr    start address for source pages.
+ * dst_addr    start address for destination pages.
+ * copy_len    number of bytes that need to be copied
  */
 struct kern_reloc_arg {
phys_addr_t head;
@@ -114,6 +120,12 @@ struct kern_reloc_arg {
phys_addr_t kern_arg2;
phys_addr_t kern_arg3;
phys_addr_t el2_vector;
+   phys_addr_t trans_ttbr0;
+   phys_addr_t trans_ttbr1;
+   unsigned long trans_t0sz;
+   unsigned long src_addr;
+   unsigned long dst_addr;
+   unsigned long copy_len;
 };
 
 #define ARCH_HAS_KIMAGE_ARCH
diff --git a/arch/arm64/kernel/asm-offsets.c b/arch/arm64/kernel/asm-offsets.c
index 8a9475be1b62..06278611451d 100644
--- a/arch/arm64/kernel/asm-offsets.c
+++ b/arch/arm64/kernel/asm-offsets.c
@@ -160,6 +160,12 @@ int main(void)
   DEFINE(KEXEC_KRELOC_KERN_ARG2,   offsetof(struct kern_reloc_arg, kern_arg2));
   DEFINE(KEXEC_KRELOC_KERN_ARG3,   offsetof(struct kern_reloc_arg, kern_arg3));
   DEFINE(KEXEC_KRELOC_EL2_VECTOR,  offsetof(struct kern_reloc_arg, el2_vector));
+  DEFINE(KEXEC_KRELOC_TRANS_TTBR0, offsetof(struct kern_reloc_arg, trans_ttbr0));
+  DEFINE(KEXEC_KRELOC_TRANS_TTBR1, offsetof(struct kern_reloc_arg, trans_ttbr1));
+  DEFINE(KEXEC_KRELOC_TRANS_T0SZ,  offsetof(struct kern_reloc_arg, trans_t0sz));
+  DEFINE(KEXEC_KRELOC_SRC_ADDR,    offsetof(struct kern_reloc_arg, src_addr));
+  DEFINE(KEXEC_KRELOC_DST_ADDR,    offsetof(struct kern_reloc_arg, dst_addr));
+  DEFINE(KEXEC_KRELOC_COPY_LEN,    offsetof(struct kern_reloc_arg, copy_len));
 #endif
   return 0;
 }
diff --git a/arch/arm64/kernel/machine_kexec.c b/arch/arm64/kernel/machine_kexec.c
index 41d1e3ca13f8..dc1b7e5a54fb 100644
--- a/arch/arm64/kernel/machine_kexec.c
+++ b/arch/arm64/kernel/machine_kexec.c
@@ -21,6 +21,7 @@
 #include 
 #include 
 #include 
+#include 
 
 #include "cpu-reset.h"
 
@@ -71,11 +72,91 @@ static void *kexec_page_alloc(void *arg)
return page_address(page);
 }
 
+/*
+ * Map source segments starting from src_va, and map destination
+ * segments starting from dst_va, and return size of copy in
+ * *copy_len argument.
+ * Relocation function essentially needs to do:
+ * memcpy(dst_va, src_va, copy_len);
+ */
+static int map_segments(struct kimage *kimage, pgd_t *pgdp,
+   struct trans_pgd_info *info,
+   unsigned long src_va,
+   unsigned long dst_va,
+   unsigned long *copy_len)
+{
+   unsigned long *ptr = 0;
+   unsigned long dest = 0;
+   unsigned long len = 0;
+   unsigned long entry, addr;
+   int rc;
+
+   for (entry = kimage->head; !(entry & IND_DONE); entry = *ptr++) {
+   addr = entry & PAGE_MASK;
+
+   switch (entry & IND_FLAGS) {
+   case IND_DESTINATION:
+   dest = addr;
+   break;
+   case IND_INDIRECTION:
+   ptr = __va(addr);
+   break;
+   case IND_SOURCE:
+   rc = trans_pgd_map_page(info, pgdp, __va(addr),
+   src_va, PAGE_KERNEL);
+   if (rc)
+   return rc;
+   rc = trans_pgd_map_page(info, pgdp, __va(dest),
+   dst_va, PAGE_KERNEL);
+   if (rc)
+   return rc;
+   dest += PAGE_SIZE;
+   src_va += PAGE_SIZE;
+   dst_va += PAGE_SIZE;
+   len += PAGE_SIZE;
+   }
+   }
+   *copy_len = len;
+
+   return 0;
+}

Re: Issue in dmesg time with lockless ring buffer

2021-01-25 Thread J. Avila
Hello,

This dmesg uses /dev/kmsg; we've verified that we don't see this long
dmesg time when reading from syslog (via dmesg -S).

We've also tried testing this with logging daemons disabled as well as
within initrd - both result in similar behavior.

If it's relevant, this was done on a toybox shell.

Thanks,

Avila

On Mon, Jan 25, 2021 at 5:32 AM John Ogness  wrote:
>
> On 2021-01-22, "J. Avila"  wrote:
> > When doing some internal testing on a 5.10.4 kernel, we found that the
> > time taken for dmesg seemed to increase from the order of milliseconds
> > to the order of seconds when the dmesg size approached the ~1.2MB
> > limit. After doing some digging, we found that by reverting all of the
> > patches in printk/ up to and including
> > 896fbe20b4e2333fb55cc9b9b783ebcc49eee7c7 ("use the lockless
> > ringbuffer"), we were able to once more see normal dmesg times.
> >
> > This kernel had no meaningful diffs in the printk/ dir when compared
> > to Linus' tree. This behavior was consistently reproducible using the
> > following steps:
> >
> > 1) In one shell, run "time dmesg > /dev/null"
> > 2) In another, constantly write to /dev/kmsg
> >
> > Within ~5 minutes, we saw that dmesg times increased to 1 second, only
> > increasing further from there. Is this a known issue?
>
> The last couple days I have tried to reproduce this issue with no
> success.
>
> Is your dmesg using /dev/kmsg or syslog() to read the buffer?
>
> Are there any syslog daemons or systemd running? Perhaps you can run
> your test within an initrd to see if this effect is still visible?
>
> John Ogness



[PATCH v10 12/18] arm64: kexec: arm64_relocate_new_kernel don't use x0 as temp

2021-01-25 Thread Pavel Tatashin
x0 will contain the only argument to arm64_relocate_new_kernel; don't
use it as a temp. Reassign registers to free up x0 so we won't need
to copy the argument, and can use it at the beginning and at the end
of the function.

Signed-off-by: Pavel Tatashin 
Reviewed-by: James Morse 
---
 arch/arm64/kernel/relocate_kernel.S | 16 
 1 file changed, 8 insertions(+), 8 deletions(-)

diff --git a/arch/arm64/kernel/relocate_kernel.S b/arch/arm64/kernel/relocate_kernel.S
index 462ffbc37071..b78ea5de97a4 100644
--- a/arch/arm64/kernel/relocate_kernel.S
+++ b/arch/arm64/kernel/relocate_kernel.S
@@ -34,7 +34,7 @@ SYM_CODE_START(arm64_relocate_new_kernel)
mov x13, xzr/* x13 = copy dest */
/* Check if the new image needs relocation. */
tbnzx16, IND_DONE_BIT, .Ldone
-   raw_dcache_line_size x15, x0/* x15 = dcache line size */
+   raw_dcache_line_size x15, x1/* x15 = dcache line size */
 .Lloop:
and x12, x16, PAGE_MASK /* x12 = addr */
 
@@ -43,17 +43,17 @@ SYM_CODE_START(arm64_relocate_new_kernel)
tbz x16, IND_SOURCE_BIT, .Ltest_indirection
 
/* Invalidate dest page to PoC. */
-   mov x0, x13
-   add x20, x0, #PAGE_SIZE
+   mov x2, x13
+   add x20, x2, #PAGE_SIZE
sub x1, x15, #1
-   bic x0, x0, x1
-2: dc  ivac, x0
-   add x0, x0, x15
-   cmp x0, x20
+   bic x2, x2, x1
+2: dc  ivac, x2
+   add x2, x2, x15
+   cmp x2, x20
b.lo2b
dsb sy
 
-   copy_page x13, x12, x0, x1, x2, x3, x4, x5, x6, x7
+   copy_page x13, x12, x1, x2, x3, x4, x5, x6, x7, x8
b   .Lnext
 .Ltest_indirection:
tbz x16, IND_INDIRECTION_BIT, .Ltest_destination
-- 
2.25.1




[PATCH v10 13/18] arm64: kexec: add expandable argument to relocation function

2021-01-25 Thread Pavel Tatashin
Currently, the kexec relocation function (arm64_relocate_new_kernel)
accepts the following arguments:

head:    start of array that contains relocation information.
entry:   entry point for new kernel or purgatory.
dtb_mem: first and only argument to entry.

The number of arguments cannot be easily expanded, because this
function is also called from HVC_SOFT_RESTART, which preserves only
three arguments (hypervisor ABI). Also, arm64_relocate_new_kernel is
written in assembly and called without a stack, so there is no place
to move extra arguments into free registers.

Soon, we will need to pass more arguments: once we enable the MMU we
will need to pass information about page tables.

Add a new struct, kern_reloc_arg, and place it in a kexec-safe page
(i.e. memory that is not overwritten during relocation). Thus,
arm64_relocate_new_kernel takes only one argument that contains all
the needed information, as sketched below.

Note:
Another benefit of this scheme is that the kernel can actually accept
up to 4 arguments (x0-x3). Currently only one is used, but if in the
future we need more (for example, passing the time when the previous
kernel exited, to precisely measure time spent in purgatory), we
would not easily be able to do that if arm64_relocate_new_kernel
could not accept more arguments.
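
A minimal sketch of the resulting calling convention (kern_reloc_arg
fields are from this patch; kexec_page_alloc() and the two-argument
cpu_soft_restart() are assumed from the rest of this series, so treat
this as illustrative only):

        /* all parameters live in a single kexec-safe page ... */
        struct kern_reloc_arg *arg = kexec_page_alloc(kimage);

        arg->head       = kimage->head;
        arg->entry_addr = kimage->start;
        arg->kern_arg0  = kimage->arch.dtb_mem;

        /* ... so a single register carries everything, fitting within
         * HVC_SOFT_RESTART's three-argument limit */
        cpu_soft_restart(kimage->arch.kern_reloc, __pa(arg));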

Signed-off-by: Pavel Tatashin 
---
 arch/arm64/include/asm/kexec.h  | 18 ++
 arch/arm64/kernel/asm-offsets.c |  9 +
 arch/arm64/kernel/cpu-reset.S   | 11 +++
 arch/arm64/kernel/cpu-reset.h   |  8 +++-
 arch/arm64/kernel/machine_kexec.c   | 27 +--
 arch/arm64/kernel/relocate_kernel.S | 21 -
 6 files changed, 66 insertions(+), 28 deletions(-)

diff --git a/arch/arm64/include/asm/kexec.h b/arch/arm64/include/asm/kexec.h
index 9befcd87e9a8..990185744148 100644
--- a/arch/arm64/include/asm/kexec.h
+++ b/arch/arm64/include/asm/kexec.h
@@ -90,12 +90,30 @@ static inline void crash_prepare_suspend(void) {}
 static inline void crash_post_resume(void) {}
 #endif
 
+/*
+ * kern_reloc_arg is passed to kernel relocation function as an argument.
+ * head        kimage->head, allows to traverse through relocation segments.
+ * entry_addr  kimage->start, where to jump from relocation function (new
+ * kernel, or purgatory entry address).
+ * kern_arg0   first argument to kernel is its dtb address. The other
+ * arguments are currently unused, and must be set to 0
+ */
+struct kern_reloc_arg {
+   phys_addr_t head;
+   phys_addr_t entry_addr;
+   phys_addr_t kern_arg0;
+   phys_addr_t kern_arg1;
+   phys_addr_t kern_arg2;
+   phys_addr_t kern_arg3;
+};
+
 #define ARCH_HAS_KIMAGE_ARCH
 
 struct kimage_arch {
void *dtb;
phys_addr_t dtb_mem;
phys_addr_t kern_reloc;
+   phys_addr_t kern_reloc_arg;
/* Core ELF header buffer */
void *elf_headers;
unsigned long elf_headers_mem;
diff --git a/arch/arm64/kernel/asm-offsets.c b/arch/arm64/kernel/asm-offsets.c
index 301784463587..6067a288f568 100644
--- a/arch/arm64/kernel/asm-offsets.c
+++ b/arch/arm64/kernel/asm-offsets.c
@@ -23,6 +23,7 @@
 #include 
 #include 
 #include 
+#include 
 
 int main(void)
 {
@@ -150,6 +151,14 @@ int main(void)
   DEFINE(PTRAUTH_USER_KEY_APGA,    offsetof(struct ptrauth_keys_user, apga));
   DEFINE(PTRAUTH_KERNEL_KEY_APIA,  offsetof(struct ptrauth_keys_kernel, apia));
   BLANK();
+#endif
+#ifdef CONFIG_KEXEC_CORE
+  DEFINE(KEXEC_KRELOC_HEAD,        offsetof(struct kern_reloc_arg, head));
+  DEFINE(KEXEC_KRELOC_ENTRY_ADDR,  offsetof(struct kern_reloc_arg, entry_addr));
+  DEFINE(KEXEC_KRELOC_KERN_ARG0,   offsetof(struct kern_reloc_arg, kern_arg0));
+  DEFINE(KEXEC_KRELOC_KERN_ARG1,   offsetof(struct kern_reloc_arg, kern_arg1));
+  DEFINE(KEXEC_KRELOC_KERN_ARG2,   offsetof(struct kern_reloc_arg, kern_arg2));
+  DEFINE(KEXEC_KRELOC_KERN_ARG3,   offsetof(struct kern_reloc_arg, kern_arg3));
 #endif
   return 0;
 }
diff --git a/arch/arm64/kernel/cpu-reset.S b/arch/arm64/kernel/cpu-reset.S
index 37721eb6f9a1..bbf70db43744 100644
--- a/arch/arm64/kernel/cpu-reset.S
+++ b/arch/arm64/kernel/cpu-reset.S
@@ -16,14 +16,11 @@
 .pushsection.idmap.text, "awx"
 
 /*
- * __cpu_soft_restart(el2_switch, entry, arg0, arg1, arg2) - Helper for
- * cpu_soft_restart.
+ * __cpu_soft_restart(el2_switch, entry, arg) - Helper for cpu_soft_restart.
  *
  * @el2_switch: Flag to indicate a switch to EL2 is needed.
  * @entry: Location to jump to for soft reset.
- * arg0: First argument passed to @entry. (relocation list)
- * arg1: Second argument passed to @entry.(physical kernel entry)
- * arg2: Third argument passed to @entry. (physical dtb address)
+ * arg: Entry argument
  *
  * Put the CPU into the same state as it would be if it had been reset, and
 * branch to what would be the reset vector. It must be executed with the
 * flat identity mapping.

[PATCH v10 15/18] arm64: kexec: kexec may require EL2 vectors

2021-01-25 Thread Pavel Tatashin
If we have EL2 without VHE, the EL2 vectors are needed in order
to switch to EL2 and jump to the new world with hypervisor privileges.

Signed-off-by: Pavel Tatashin 
---
 arch/arm64/include/asm/kexec.h  |  5 +
 arch/arm64/kernel/asm-offsets.c |  1 +
 arch/arm64/kernel/machine_kexec.c   |  9 +++-
 arch/arm64/kernel/relocate_kernel.S | 35 +
 4 files changed, 49 insertions(+), 1 deletion(-)

diff --git a/arch/arm64/include/asm/kexec.h b/arch/arm64/include/asm/kexec.h
index 7f4f9abdf049..b96d8a6aac80 100644
--- a/arch/arm64/include/asm/kexec.h
+++ b/arch/arm64/include/asm/kexec.h
@@ -92,6 +92,7 @@ static inline void crash_post_resume(void) {}
 
 #if defined(CONFIG_KEXEC_CORE)
 extern const char arm64_relocate_new_kernel[];
+extern const char arm64_kexec_el2_vectors[];
 #endif
 
 /*
@@ -101,6 +102,9 @@ extern const char arm64_relocate_new_kernel[];
  * kernel, or purgatory entry address).
  * kern_arg0   first argument to kernel is its dtb address. The other
  * arguments are currently unused, and must be set to 0
+ * el2_vector  If present means that relocation routine will go to EL1
+ * from EL2 to do the copy, and then back to EL2 to do the jump
+ * to new world.
  */
 struct kern_reloc_arg {
phys_addr_t head;
@@ -109,6 +113,7 @@ struct kern_reloc_arg {
phys_addr_t kern_arg1;
phys_addr_t kern_arg2;
phys_addr_t kern_arg3;
+   phys_addr_t el2_vector;
 };
 
 #define ARCH_HAS_KIMAGE_ARCH
diff --git a/arch/arm64/kernel/asm-offsets.c b/arch/arm64/kernel/asm-offsets.c
index 6067a288f568..8a9475be1b62 100644
--- a/arch/arm64/kernel/asm-offsets.c
+++ b/arch/arm64/kernel/asm-offsets.c
@@ -159,6 +159,7 @@ int main(void)
   DEFINE(KEXEC_KRELOC_KERN_ARG1,   offsetof(struct kern_reloc_arg, kern_arg1));
   DEFINE(KEXEC_KRELOC_KERN_ARG2,   offsetof(struct kern_reloc_arg, kern_arg2));
   DEFINE(KEXEC_KRELOC_KERN_ARG3,   offsetof(struct kern_reloc_arg, kern_arg3));
+  DEFINE(KEXEC_KRELOC_EL2_VECTOR,  offsetof(struct kern_reloc_arg, el2_vector));
 #endif
   return 0;
 }
diff --git a/arch/arm64/kernel/machine_kexec.c b/arch/arm64/kernel/machine_kexec.c
index 361a4d082093..41d1e3ca13f8 100644
--- a/arch/arm64/kernel/machine_kexec.c
+++ b/arch/arm64/kernel/machine_kexec.c
@@ -75,19 +75,26 @@ int machine_kexec_post_load(struct kimage *kimage)
 {
void *reloc_code = page_to_virt(kimage->control_code_page);
struct kern_reloc_arg *kern_reloc_arg = kexec_page_alloc(kimage);
-   long func_offset, reloc_size;
+   long func_offset, vector_offset, reloc_size;
 
if (!kern_reloc_arg)
return -ENOMEM;
 
func_offset = arm64_relocate_new_kernel - __relocate_new_kernel_start;
reloc_size = __relocate_new_kernel_end - __relocate_new_kernel_start;
+   vector_offset = arm64_kexec_el2_vectors - __relocate_new_kernel_start;
+
memcpy(reloc_code, __relocate_new_kernel_start, reloc_size);
kimage->arch.kern_reloc = __pa(reloc_code) + func_offset;
kimage->arch.kern_reloc_arg = __pa(kern_reloc_arg);
kern_reloc_arg->head = kimage->head;
kern_reloc_arg->entry_addr = kimage->start;
kern_reloc_arg->kern_arg0 = kimage->arch.dtb_mem;
+
+   /* Setup vector table only when EL2 is available, but no VHE */
+   if (is_hyp_mode_available() && !is_kernel_in_hyp_mode())
+   kern_reloc_arg->el2_vector = __pa(reloc_code) + vector_offset;
+
kexec_image_info(kimage);
 
/* Flush the reloc_code in preparation for its execution. */
diff --git a/arch/arm64/kernel/relocate_kernel.S b/arch/arm64/kernel/relocate_kernel.S
index d2a4a0b0d76b..c6178b1a4e60 100644
--- a/arch/arm64/kernel/relocate_kernel.S
+++ b/arch/arm64/kernel/relocate_kernel.S
@@ -14,6 +14,17 @@
 #include 
 #include 
 
+.macro el1_sync_64
+   .align 7
+   br  x4  /* Jump to new world from el2 */
+.endm
+
+.macro invalid_vector label
+\label:
+   .align 7
+   b \label
+.endm
+
 .pushsection".kexec_relocate.text", "ax"
 /*
  * arm64_relocate_new_kernel - Put a 2nd stage image in place and boot it.
@@ -76,4 +87,28 @@ SYM_CODE_START(arm64_relocate_new_kernel)
ldr x0, [x0, #KEXEC_KRELOC_KERN_ARG0]   /* x0 = dtb address */
br  x4
 SYM_CODE_END(arm64_relocate_new_kernel)
+
+/* el2 vectors - switch el2 here while we restore the memory image. */
+   .align 11
+SYM_CODE_START(arm64_kexec_el2_vectors)
+   invalid_vector el2_sync_invalid_sp0 /* Synchronous EL2t */
+   invalid_vector el2_irq_invalid_sp0  /* IRQ EL2t */
+   invalid_vector el2_fiq_invalid_sp0  /* FIQ EL2t */
+   invalid_vector el2_error_invalid_sp0/* Error EL2t */
+
+   invalid_vector el2_sync_invalid_spx /* Synchronous EL2h */
+   invalid_vector el2_irq_invalid_spx  /* IRQ EL2h */
+   invalid_vector el2_fiq_invalid_spx  /* FIQ EL2h */

[PATCH v10 10/18] arm64: kexec: call kexec_image_info only once

2021-01-25 Thread Pavel Tatashin
Currently, kexec_image_info() is called during load time, and
right before the kernel is kexec'ed. There is no need to do both.
So, call it only once, when segments are loaded and the physical
location of the page with the copy of arm64_relocate_new_kernel is known.

Signed-off-by: Pavel Tatashin 
Acked-by: James Morse 
---
 arch/arm64/kernel/machine_kexec.c | 5 +
 1 file changed, 1 insertion(+), 4 deletions(-)

diff --git a/arch/arm64/kernel/machine_kexec.c b/arch/arm64/kernel/machine_kexec.c
index a8aaa6562429..90a335c74442 100644
--- a/arch/arm64/kernel/machine_kexec.c
+++ b/arch/arm64/kernel/machine_kexec.c
@@ -66,6 +66,7 @@ int machine_kexec_post_load(struct kimage *kimage)
memcpy(reloc_code, arm64_relocate_new_kernel,
   arm64_relocate_new_kernel_size);
kimage->arch.kern_reloc = __pa(reloc_code);
+   kexec_image_info(kimage);
 
/* Flush the reloc_code in preparation for its execution. */
__flush_dcache_area(reloc_code, arm64_relocate_new_kernel_size);
@@ -84,8 +85,6 @@ int machine_kexec_post_load(struct kimage *kimage)
  */
 int machine_kexec_prepare(struct kimage *kimage)
 {
-   kexec_image_info(kimage);
-
if (kimage->type != KEXEC_TYPE_CRASH && cpus_are_stuck_in_kernel()) {
pr_err("Can't kexec: CPUs are stuck in the kernel.\n");
return -EBUSY;
@@ -170,8 +169,6 @@ void machine_kexec(struct kimage *kimage)
WARN(in_kexec_crash && (stuck_cpus || smp_crash_stop_failed()),
"Some CPUs may be stale, kdump will be unreliable.\n");
 
-   kexec_image_info(kimage);
-
/* Flush the kimage list and its buffers. */
kexec_list_flush(kimage);
 
-- 
2.25.1




[PATCH v10 17/18] arm64: kexec: enable MMU during kexec relocation

2021-01-25 Thread Pavel Tatashin
Now that we have the transitional page tables configured, temporarily
enable the MMU to allow faster relocation of segments to their final
destination.

The performance data: for a moderately sized kernel plus initramfs
(25M), the relocation was taking 0.382s; with the MMU enabled it now
takes only 0.019s, a 20x improvement.

The time is proportional to the size of the relocation, therefore a
larger initramfs (100M) could take over a second.
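
(For scale: 25M in 0.382s is roughly 65 MB/s with the MMU and caches
off, versus roughly 1.3 GB/s at 0.019s with them on; at the uncached
rate a 100M initramfs would indeed take about 1.5 seconds.)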

Signed-off-by: Pavel Tatashin 
---
 arch/arm64/kernel/relocate_kernel.S | 131 ++--
 1 file changed, 87 insertions(+), 44 deletions(-)

diff --git a/arch/arm64/kernel/relocate_kernel.S b/arch/arm64/kernel/relocate_kernel.S
index c6178b1a4e60..9c60981a6911 100644
--- a/arch/arm64/kernel/relocate_kernel.S
+++ b/arch/arm64/kernel/relocate_kernel.S
@@ -4,6 +4,8 @@
  *
  * Copyright (C) Linaro.
  * Copyright (C) Huawei Futurewei Technologies.
+ * Copyright (C) 2020, Microsoft Corporation.
+ * Pavel Tatashin 
  */
 
 #include 
@@ -14,6 +16,54 @@
 #include 
 #include 
 
+.macro tlb_invalidate
+   dsb sy
+   dsb ish
+   tlbivmalle1
+   dsb ish
+   isb
+.endm
+
+.macro turn_off_mmu tmp1, tmp2
+   mrs \tmp1, sctlr_el1
+   mov_q   \tmp2, SCTLR_ELx_FLAGS
+   bic \tmp1, \tmp1, \tmp2
+   pre_disable_mmu_workaround
+   msr sctlr_el1, \tmp1
+   isb
+.endm
+
+.macro turn_on_mmu tmp1, tmp2
+   mrs \tmp1, sctlr_el1
+   mov_q   \tmp2, SCTLR_ELx_FLAGS
+   orr \tmp1, \tmp1, \tmp2
+   msr sctlr_el1, \tmp1
+   ic  iallu
+   dsb nsh
+   isb
+.endm
+
+/*
+ * Set ttbr0 and ttbr1, called while MMU is disabled, so no need to temporarily
+ * set zero_page table. Invalidate TLB after new tables are set.
+ */
+.macro set_ttbr arg, tmp1, tmp2
+   ldr \tmp1, [\arg, #KEXEC_KRELOC_TRANS_TTBR0]
+   msr ttbr0_el1, \tmp1
+   ldr \tmp1, [\arg, #KEXEC_KRELOC_TRANS_TTBR1]
+   offset_ttbr1 \tmp1, \tmp2
+   msr ttbr1_el1, \tmp1
+   isb
+.endm
+
+/* Set T0SZ to match the requirements of idmap page */
+.macro set_tcr_t0sz arg, tmp1, tmp2
+   ldr \tmp2, [\arg, #KEXEC_KRELOC_TRANS_T0SZ]
+   mrs \tmp1, tcr_el1
+   bfi \tmp1, \tmp2, TCR_T0SZ_OFFSET, TCR_TxSZ_WIDTH
+   msr tcr_el1, \tmp1
+.endm
+
 .macro el1_sync_64
.align 7
br  x4  /* Jump to new world from el2 */
@@ -36,56 +86,49 @@
  * symbols arm64_relocate_new_kernel and arm64_relocate_new_kernel_end.  The
  * machine_kexec() routine will copy arm64_relocate_new_kernel to the kexec
  * safe memory that has been set up to be preserved during the copy operation.
+ *
+ * This function temporarily enables MMU if kernel relocation is needed.
+ * Also, if we enter this function at EL2 on non-VHE kernel, we temporarily go
+ * to EL1 to enable MMU, and escalate back to EL2 at the end to do the jump to
+ * the new kernel. This is determined by presence of el2_vector.
  */
 SYM_CODE_START(arm64_relocate_new_kernel)
-   /* Check if the new image needs relocation. */
-   ldr x16, [x0, #KEXEC_KRELOC_HEAD]   /* x16 = kimage_head */
-   tbnzx16, IND_DONE_BIT, .Ldone
-   raw_dcache_line_size x15, x1/* x15 = dcache line size */
-.Lloop:
-   and x12, x16, PAGE_MASK /* x12 = addr */
-
-   /* Test the entry flags. */
-.Ltest_source:
-   tbz x16, IND_SOURCE_BIT, .Ltest_indirection
-
-   /* Invalidate dest page to PoC. */
-   mov x2, x13
-   add x20, x2, #PAGE_SIZE
-   sub x1, x15, #1
-   bic x2, x2, x1
-2: dc  ivac, x2
-   add x2, x2, x15
-   cmp x2, x20
-   b.lo2b
-   dsb sy
-
-   copy_page x13, x12, x1, x2, x3, x4, x5, x6, x7, x8
-   b   .Lnext
-.Ltest_indirection:
-   tbz x16, IND_INDIRECTION_BIT, .Ltest_destination
-   mov x14, x12/* ptr = addr */
-   b   .Lnext
-.Ltest_destination:
-   tbz x16, IND_DESTINATION_BIT, .Lnext
-   mov x13, x12/* dest = addr */
-.Lnext:
-   ldr x16, [x14], #8  /* entry = *ptr++ */
-   tbz x16, IND_DONE_BIT, .Lloop   /* while (!(entry & DONE)) */
-.Ldone:
-   /* wait for writes from copy_page to finish */
-   dsb nsh
-   ic  iallu
-   dsb nsh
-   isb
-
-   /* Start new image. */
-   ldr x4, [x0, #KEXEC_KRELOC_ENTRY_ADDR]  /* x4 = kimage_start */
+   mov x20, xzr/* x20 will hold vector value */
+   ldr x11, [x0, #KEXEC_KRELOC_COPY_LEN]
+   cbz x11, 5f /* Check if need to relocate */
+   ldr x20, [x0, #KEXEC_KRELOC_EL2_VECTOR]
+   cbz x20, 2f /* need to reduce to EL1? */
+   msr vbar_el2, x20   /* el2_vector present, means */
+   adr x1, 2f   

[PATCH v10 18/18] arm64: kexec: remove head from relocation argument

2021-01-25 Thread Pavel Tatashin
Now that relocation is done using virtual addresses, reloc_arg->head
is no longer needed.

Signed-off-by: Pavel Tatashin 
---
 arch/arm64/include/asm/kexec.h    | 2 --
 arch/arm64/kernel/asm-offsets.c   | 1 -
 arch/arm64/kernel/machine_kexec.c | 1 -
 3 files changed, 4 deletions(-)

diff --git a/arch/arm64/include/asm/kexec.h b/arch/arm64/include/asm/kexec.h
index 049cde429b1b..2fa4109bd582 100644
--- a/arch/arm64/include/asm/kexec.h
+++ b/arch/arm64/include/asm/kexec.h
@@ -97,7 +97,6 @@ extern const char arm64_kexec_el2_vectors[];
 
 /*
  * kern_reloc_arg is passed to kernel relocation function as an argument.
- * head        kimage->head, allows to traverse through relocation segments.
  * entry_addr  kimage->start, where to jump from relocation function (new
  * kernel, or purgatory entry address).
  * kern_arg0   first argument to kernel is its dtb address. The other
@@ -113,7 +112,6 @@ extern const char arm64_kexec_el2_vectors[];
 * copy_len    number of bytes that need to be copied
  */
 struct kern_reloc_arg {
-   phys_addr_t head;
phys_addr_t entry_addr;
phys_addr_t kern_arg0;
phys_addr_t kern_arg1;
diff --git a/arch/arm64/kernel/asm-offsets.c b/arch/arm64/kernel/asm-offsets.c
index 06278611451d..94f050ad6471 100644
--- a/arch/arm64/kernel/asm-offsets.c
+++ b/arch/arm64/kernel/asm-offsets.c
@@ -153,7 +153,6 @@ int main(void)
   BLANK();
 #endif
 #ifdef CONFIG_KEXEC_CORE
-  DEFINE(KEXEC_KRELOC_HEAD,        offsetof(struct kern_reloc_arg, head));
   DEFINE(KEXEC_KRELOC_ENTRY_ADDR,  offsetof(struct kern_reloc_arg, entry_addr));
   DEFINE(KEXEC_KRELOC_KERN_ARG0,   offsetof(struct kern_reloc_arg, kern_arg0));
   DEFINE(KEXEC_KRELOC_KERN_ARG1,   offsetof(struct kern_reloc_arg, kern_arg1));
diff --git a/arch/arm64/kernel/machine_kexec.c b/arch/arm64/kernel/machine_kexec.c
index dc1b7e5a54fb..c2dff232a85b 100644
--- a/arch/arm64/kernel/machine_kexec.c
+++ b/arch/arm64/kernel/machine_kexec.c
@@ -168,7 +168,6 @@ int machine_kexec_post_load(struct kimage *kimage)
memcpy(reloc_code, __relocate_new_kernel_start, reloc_size);
kimage->arch.kern_reloc = __pa(reloc_code) + func_offset;
kimage->arch.kern_reloc_arg = __pa(kern_reloc_arg);
-   kern_reloc_arg->head = kimage->head;
kern_reloc_arg->entry_addr = kimage->start;
kern_reloc_arg->kern_arg0 = kimage->arch.dtb_mem;
 
-- 
2.25.1




[PATCH v10 11/18] arm64: kexec: arm64_relocate_new_kernel clean-ups and optimizations

2021-01-25 Thread Pavel Tatashin
In preparation for bigger changes to arm64_relocate_new_kernel that will
enable this function to do MMU-backed memory copies, do a few clean-ups
and optimizations. These include:

1. Call raw_dcache_line_size() only when relocation is actually going to
   happen; a kdump-type kexec does not need it.

2. copy_page(dest, src, tmps...) increments dest and src by PAGE_SIZE,
   so there is no need to store dest prior to calling copy_page and
   increment it after. Also, src is not used after a copy, so no need
   to preserve it either.

3. For consistency, put a comment on the same line as the instruction
   when it describes the instruction itself.

4. Some comment corrections. (See the C sketch of the indirection-list
   walk below for reference.)
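
A C rendition of the loop this assembly implements (modeled on the
IND_* switch in map_segments() from patch 16/18 of this series;
illustrative only, since the assembly works on physical addresses with
the MMU off):

        for (entry = kimage->head; !(entry & IND_DONE); entry = *ptr++) {
                addr = entry & PAGE_MASK;

                switch (entry & IND_FLAGS) {
                case IND_DESTINATION:
                        dest = addr;            /* dest = addr */
                        break;
                case IND_INDIRECTION:
                        ptr = __va(addr);       /* ptr = addr */
                        break;
                case IND_SOURCE:
                        copy_page(__va(dest), __va(addr));
                        dest += PAGE_SIZE;      /* dest advances per page */
                        break;
                }
        }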

Signed-off-by: Pavel Tatashin 
---
 arch/arm64/kernel/relocate_kernel.S | 36 +++--
 1 file changed, 8 insertions(+), 28 deletions(-)

diff --git a/arch/arm64/kernel/relocate_kernel.S b/arch/arm64/kernel/relocate_kernel.S
index 84eec95ec06c..462ffbc37071 100644
--- a/arch/arm64/kernel/relocate_kernel.S
+++ b/arch/arm64/kernel/relocate_kernel.S
@@ -17,28 +17,24 @@
 /*
  * arm64_relocate_new_kernel - Put a 2nd stage image in place and boot it.
  *
- * The memory that the old kernel occupies may be overwritten when coping the
+ * The memory that the old kernel occupies may be overwritten when copying the
  * new image to its final location.  To assure that the
  * arm64_relocate_new_kernel routine which does that copy is not overwritten,
  * all code and data needed by arm64_relocate_new_kernel must be between the
  * symbols arm64_relocate_new_kernel and arm64_relocate_new_kernel_end.  The
  * machine_kexec() routine will copy arm64_relocate_new_kernel to the kexec
- * control_code_page, a special page which has been set up to be preserved
- * during the copy operation.
+ * safe memory that has been set up to be preserved during the copy operation.
  */
 SYM_CODE_START(arm64_relocate_new_kernel)
-
/* Setup the list loop variables. */
mov x18, x2 /* x18 = dtb address */
mov x17, x1 /* x17 = kimage_start */
mov x16, x0 /* x16 = kimage_head */
-   raw_dcache_line_size x15, x0/* x15 = dcache line size */
mov x14, xzr/* x14 = entry ptr */
mov x13, xzr/* x13 = copy dest */
-
/* Check if the new image needs relocation. */
tbnzx16, IND_DONE_BIT, .Ldone
-
+   raw_dcache_line_size x15, x0/* x15 = dcache line size */
 .Lloop:
and x12, x16, PAGE_MASK /* x12 = addr */
 
@@ -57,34 +53,18 @@ SYM_CODE_START(arm64_relocate_new_kernel)
b.lo2b
dsb sy
 
-   mov x20, x13
-   mov x21, x12
-   copy_page x20, x21, x0, x1, x2, x3, x4, x5, x6, x7
-
-   /* dest += PAGE_SIZE */
-   add x13, x13, PAGE_SIZE
+   copy_page x13, x12, x0, x1, x2, x3, x4, x5, x6, x7
b   .Lnext
-
 .Ltest_indirection:
tbz x16, IND_INDIRECTION_BIT, .Ltest_destination
-
-   /* ptr = addr */
-   mov x14, x12
+   mov x14, x12/* ptr = addr */
b   .Lnext
-
 .Ltest_destination:
tbz x16, IND_DESTINATION_BIT, .Lnext
-
-   /* dest = addr */
-   mov x13, x12
-
+   mov x13, x12/* dest = addr */
 .Lnext:
-   /* entry = *ptr++ */
-   ldr x16, [x14], #8
-
-   /* while (!(entry & DONE)) */
-   tbz x16, IND_DONE_BIT, .Lloop
-
+   ldr x16, [x14], #8  /* entry = *ptr++ */
+   tbz x16, IND_DONE_BIT, .Lloop   /* while (!(entry & DONE)) */
 .Ldone:
/* wait for writes from copy_page to finish */
dsb nsh
-- 
2.25.1




[PATCH v10 16/18] arm64: kexec: configure trans_pgd page table for kexec

2021-01-25 Thread Pavel Tatashin
Configure a page table located in kexec-safe memory that has
the following mappings:

1. identity mapping for the text of the relocation function, with
   executable permission;
2. va mappings for all source ranges;
3. va mappings for all destination ranges.

Signed-off-by: Pavel Tatashin 
---
 arch/arm64/include/asm/kexec.h    | 12 
 arch/arm64/kernel/asm-offsets.c   |  6 ++
 arch/arm64/kernel/machine_kexec.c | 91 ++-
 3 files changed, 108 insertions(+), 1 deletion(-)

diff --git a/arch/arm64/include/asm/kexec.h b/arch/arm64/include/asm/kexec.h
index b96d8a6aac80..049cde429b1b 100644
--- a/arch/arm64/include/asm/kexec.h
+++ b/arch/arm64/include/asm/kexec.h
@@ -105,6 +105,12 @@ extern const char arm64_kexec_el2_vectors[];
  * el2_vector  If present means that relocation routine will go to EL1
  * from EL2 to do the copy, and then back to EL2 to do the jump
  * to new world.
+ * trans_ttbr0 idmap for relocation function and its argument
+ * trans_ttbr1 map for source/destination addresses.
+ * trans_t0sz  t0sz for idmap page in trans_ttbr0
+ * src_addr    start address for source pages.
+ * dst_addr    start address for destination pages.
+ * copy_len    number of bytes that need to be copied
  */
 struct kern_reloc_arg {
phys_addr_t head;
@@ -114,6 +120,12 @@ struct kern_reloc_arg {
phys_addr_t kern_arg2;
phys_addr_t kern_arg3;
phys_addr_t el2_vector;
+   phys_addr_t trans_ttbr0;
+   phys_addr_t trans_ttbr1;
+   unsigned long trans_t0sz;
+   unsigned long src_addr;
+   unsigned long dst_addr;
+   unsigned long copy_len;
 };
 
 #define ARCH_HAS_KIMAGE_ARCH
diff --git a/arch/arm64/kernel/asm-offsets.c b/arch/arm64/kernel/asm-offsets.c
index 8a9475be1b62..06278611451d 100644
--- a/arch/arm64/kernel/asm-offsets.c
+++ b/arch/arm64/kernel/asm-offsets.c
@@ -160,6 +160,12 @@ int main(void)
   DEFINE(KEXEC_KRELOC_KERN_ARG2,   offsetof(struct kern_reloc_arg, kern_arg2));
   DEFINE(KEXEC_KRELOC_KERN_ARG3,   offsetof(struct kern_reloc_arg, kern_arg3));
   DEFINE(KEXEC_KRELOC_EL2_VECTOR,  offsetof(struct kern_reloc_arg, el2_vector));
+  DEFINE(KEXEC_KRELOC_TRANS_TTBR0, offsetof(struct kern_reloc_arg, trans_ttbr0));
+  DEFINE(KEXEC_KRELOC_TRANS_TTBR1, offsetof(struct kern_reloc_arg, trans_ttbr1));
+  DEFINE(KEXEC_KRELOC_TRANS_T0SZ,  offsetof(struct kern_reloc_arg, trans_t0sz));
+  DEFINE(KEXEC_KRELOC_SRC_ADDR,    offsetof(struct kern_reloc_arg, src_addr));
+  DEFINE(KEXEC_KRELOC_DST_ADDR,    offsetof(struct kern_reloc_arg, dst_addr));
+  DEFINE(KEXEC_KRELOC_COPY_LEN,    offsetof(struct kern_reloc_arg, copy_len));
 #endif
   return 0;
 }
diff --git a/arch/arm64/kernel/machine_kexec.c b/arch/arm64/kernel/machine_kexec.c
index 41d1e3ca13f8..dc1b7e5a54fb 100644
--- a/arch/arm64/kernel/machine_kexec.c
+++ b/arch/arm64/kernel/machine_kexec.c
@@ -21,6 +21,7 @@
 #include 
 #include 
 #include 
+#include 
 
 #include "cpu-reset.h"
 
@@ -71,11 +72,91 @@ static void *kexec_page_alloc(void *arg)
return page_address(page);
 }
 
+/*
+ * Map source segments starting from src_va, and map destination
+ * segments starting from dst_va, and return size of copy in
+ * *copy_len argument.
+ * Relocation function essentially needs to do:
+ * memcpy(dst_va, src_va, copy_len);
+ */
+static int map_segments(struct kimage *kimage, pgd_t *pgdp,
+   struct trans_pgd_info *info,
+   unsigned long src_va,
+   unsigned long dst_va,
+   unsigned long *copy_len)
+{
+   unsigned long *ptr = 0;
+   unsigned long dest = 0;
+   unsigned long len = 0;
+   unsigned long entry, addr;
+   int rc;
+
+   for (entry = kimage->head; !(entry & IND_DONE); entry = *ptr++) {
+   addr = entry & PAGE_MASK;
+
+   switch (entry & IND_FLAGS) {
+   case IND_DESTINATION:
+   dest = addr;
+   break;
+   case IND_INDIRECTION:
+   ptr = __va(addr);
+   break;
+   case IND_SOURCE:
+   rc = trans_pgd_map_page(info, pgdp, __va(addr),
+   src_va, PAGE_KERNEL);
+   if (rc)
+   return rc;
+   rc = trans_pgd_map_page(info, pgdp, __va(dest),
+   dst_va, PAGE_KERNEL);
+   if (rc)
+   return rc;
+   dest += PAGE_SIZE;
+   src_va += PAGE_SIZE;
+   dst_va += PAGE_SIZE;
+   len += PAGE_SIZE;
+   }
+   }
+   *copy_len = len;
+
+   return 0;
+}
+
+static int mmu_relocate_setup(struct kimage *kimage, void 

[PATCH v10 14/18] arm64: kexec: use ld script for relocation function

2021-01-25 Thread Pavel Tatashin
Currently, the relocation code declares start and end variables
which are used to compute its size.

The better way to do this is to use an ld script instead, and put the
relocation function in its own section; a sketch of the section is
shown below.

Soon, the relocation function will share the same page with the EL2
vectors, so proper marking is needed.
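
A sketch of the likely shape of the vmlinux.lds.S addition (assumed
here from the __relocate_new_kernel_{start,end} symbols declared in
sections.h below; not the verbatim hunk):

        .kexec_relocate.text : ALIGN(PAGE_SIZE) {
                __relocate_new_kernel_start = .;
                *(.kexec_relocate.text)
                __relocate_new_kernel_end = .;
        }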

Signed-off-by: Pavel Tatashin 
---
 arch/arm64/include/asm/kexec.h  |  4 
 arch/arm64/include/asm/sections.h   |  1 +
 arch/arm64/kernel/machine_kexec.c   | 17 -
 arch/arm64/kernel/relocate_kernel.S | 15 ++-
 arch/arm64/kernel/vmlinux.lds.S | 19 +++
 5 files changed, 34 insertions(+), 22 deletions(-)

diff --git a/arch/arm64/include/asm/kexec.h b/arch/arm64/include/asm/kexec.h
index 990185744148..7f4f9abdf049 100644
--- a/arch/arm64/include/asm/kexec.h
+++ b/arch/arm64/include/asm/kexec.h
@@ -90,6 +90,10 @@ static inline void crash_prepare_suspend(void) {}
 static inline void crash_post_resume(void) {}
 #endif
 
+#if defined(CONFIG_KEXEC_CORE)
+extern const char arm64_relocate_new_kernel[];
+#endif
+
 /*
  * kern_reloc_arg is passed to kernel relocation function as an argument.
 * head        kimage->head, allows to traverse through relocation segments.
diff --git a/arch/arm64/include/asm/sections.h b/arch/arm64/include/asm/sections.h
index 8ff579361731..ae873eb22205 100644
--- a/arch/arm64/include/asm/sections.h
+++ b/arch/arm64/include/asm/sections.h
@@ -19,5 +19,6 @@ extern char __exittext_begin[], __exittext_end[];
 extern char __irqentry_text_start[], __irqentry_text_end[];
 extern char __mmuoff_data_start[], __mmuoff_data_end[];
 extern char __entry_tramp_text_start[], __entry_tramp_text_end[];
+extern char __relocate_new_kernel_start[], __relocate_new_kernel_end[];
 
 #endif /* __ASM_SECTIONS_H */
diff --git a/arch/arm64/kernel/machine_kexec.c b/arch/arm64/kernel/machine_kexec.c
index 679db3f1e0c5..361a4d082093 100644
--- a/arch/arm64/kernel/machine_kexec.c
+++ b/arch/arm64/kernel/machine_kexec.c
@@ -20,13 +20,10 @@
 #include 
 #include 
 #include 
+#include 
 
 #include "cpu-reset.h"
 
-/* Global variables for the arm64_relocate_new_kernel routine. */
-extern const unsigned char arm64_relocate_new_kernel[];
-extern const unsigned long arm64_relocate_new_kernel_size;
-
 /**
  * kexec_image_info - For debugging output.
  */
@@ -78,13 +75,15 @@ int machine_kexec_post_load(struct kimage *kimage)
 {
void *reloc_code = page_to_virt(kimage->control_code_page);
struct kern_reloc_arg *kern_reloc_arg = kexec_page_alloc(kimage);
+   long func_offset, reloc_size;
 
if (!kern_reloc_arg)
return -ENOMEM;
 
-   memcpy(reloc_code, arm64_relocate_new_kernel,
-  arm64_relocate_new_kernel_size);
-   kimage->arch.kern_reloc = __pa(reloc_code);
+   func_offset = arm64_relocate_new_kernel - __relocate_new_kernel_start;
+   reloc_size = __relocate_new_kernel_end - __relocate_new_kernel_start;
+   memcpy(reloc_code, __relocate_new_kernel_start, reloc_size);
+   kimage->arch.kern_reloc = __pa(reloc_code) + func_offset;
kimage->arch.kern_reloc_arg = __pa(kern_reloc_arg);
kern_reloc_arg->head = kimage->head;
kern_reloc_arg->entry_addr = kimage->start;
@@ -92,9 +91,9 @@ int machine_kexec_post_load(struct kimage *kimage)
kexec_image_info(kimage);
 
/* Flush the reloc_code in preparation for its execution. */
-   __flush_dcache_area(reloc_code, arm64_relocate_new_kernel_size);
+   __flush_dcache_area(reloc_code, reloc_size);
flush_icache_range((uintptr_t)reloc_code, (uintptr_t)reloc_code +
-  arm64_relocate_new_kernel_size);
+  reloc_size);
__flush_dcache_area(kern_reloc_arg, sizeof(struct kern_reloc_arg));
 
return 0;
diff --git a/arch/arm64/kernel/relocate_kernel.S b/arch/arm64/kernel/relocate_kernel.S
index c92228aeddca..d2a4a0b0d76b 100644
--- a/arch/arm64/kernel/relocate_kernel.S
+++ b/arch/arm64/kernel/relocate_kernel.S
@@ -14,6 +14,7 @@
 #include 
 #include 
 
+.pushsection".kexec_relocate.text", "ax"
 /*
  * arm64_relocate_new_kernel - Put a 2nd stage image in place and boot it.
  *
@@ -75,16 +76,4 @@ SYM_CODE_START(arm64_relocate_new_kernel)
ldr x0, [x0, #KEXEC_KRELOC_KERN_ARG0]   /* x0 = dtb address */
br  x4
 SYM_CODE_END(arm64_relocate_new_kernel)
-
-.align 3   /* To keep the 64-bit values below naturally aligned. */
-
-.Lcopy_end:
-.org   KEXEC_CONTROL_PAGE_SIZE
-
-/*
- * arm64_relocate_new_kernel_size - Number of bytes to copy to the
- * control_code_page.
- */
-.globl arm64_relocate_new_kernel_size
-arm64_relocate_new_kernel_size:
-   .quad   .Lcopy_end - arm64_relocate_new_kernel
+.popsection
diff --git a/arch/arm64/kernel/vmlinux.lds.S b/arch/arm64/kernel/vmlinux.lds.S
index 4c0b0c89ad59..33b0d3c9fd3b 100644
--- a/arch/arm64/kernel/vmlinux.lds.S
+++ b/arch/arm64/kernel/vmlinux.lds.S
@@ -12,6 +12,7 @@
 #include 
 #i

[PATCH v10 08/18] arm64: trans_pgd: hibernate: idmap the single page that holds the copy page routines

2021-01-25 Thread Pavel Tatashin
From: James Morse 

To resume from hibernate, the contents of memory are restored from
the swap image. This may overwrite any page, including the running
kernel and its page tables.

Hibernate copies the code it uses to do the restore into a single
page that it knows won't be overwritten, and maps it with page tables
built from pages that won't be overwritten.

Today the address it uses for this mapping is arbitrary, but to allow
kexec to reuse this code, it needs to be idmapped. To idmap the page
we must avoid the kernel helpers that have VA_BITS baked in.

Convert create_single_mapping() to take a single PA, and idmap it.
The page tables are built in the reverse order to normal using
pfn_pte() to stir in any bits between 52:48. T0SZ is always increased
to cover 48 bits, or 52 if the copy code has bits 52:48 in its PA. The
resulting usage is sketched below.
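
A minimal sketch of how a caller consumes the new interface (condensed
from the hibernate.c hunk below; illustrative only):

        phys_addr_t trans_ttbr0;
        unsigned long t0sz;
        int rc;

        /* idmap the single page that holds the copy routines */
        rc = trans_pgd_idmap_page(&trans_info, &trans_ttbr0, &t0sz, page);
        if (rc)
                return rc;

        /* install it; the T0SZ change is undone by cpu_uninstall_idmap() */
        cpu_set_reserved_ttbr0();
        local_flush_tlb_all();
        __cpu_set_tcr_t0sz(t0sz);
        write_sysreg(trans_ttbr0, ttbr0_el1);
        isb();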

Signed-off-by: James Morse 

[Adapted the original patch from James to the trans_pgd interface, so it
can be commonly used by both kexec and hibernate. Some minor clean-ups.]

Signed-off-by: Pavel Tatashin 
Link: 
https://lore.kernel.org/linux-arm-kernel/20200115143322.214247-4-james.mo...@arm.com/
---
 arch/arm64/include/asm/trans_pgd.h |  3 ++
 arch/arm64/kernel/hibernate.c  | 32 +++
 arch/arm64/mm/trans_pgd.c  | 49 ++
 3 files changed, 63 insertions(+), 21 deletions(-)

diff --git a/arch/arm64/include/asm/trans_pgd.h b/arch/arm64/include/asm/trans_pgd.h
index 7fbf6a3ccff7..5d08e5adf3d5 100644
--- a/arch/arm64/include/asm/trans_pgd.h
+++ b/arch/arm64/include/asm/trans_pgd.h
@@ -33,4 +33,7 @@ int trans_pgd_create_copy(struct trans_pgd_info *info, pgd_t 
**trans_pgd,
 int trans_pgd_map_page(struct trans_pgd_info *info, pgd_t *trans_pgd,
   void *page, unsigned long dst_addr, pgprot_t pgprot);
 
+int trans_pgd_idmap_page(struct trans_pgd_info *info, phys_addr_t *trans_ttbr0,
+unsigned long *t0sz, void *page);
+
 #endif /* _ASM_TRANS_TABLE_H */
diff --git a/arch/arm64/kernel/hibernate.c b/arch/arm64/kernel/hibernate.c
index 94fc275cdd21..9df32ba0d574 100644
--- a/arch/arm64/kernel/hibernate.c
+++ b/arch/arm64/kernel/hibernate.c
@@ -194,7 +194,6 @@ static void *hibernate_page_alloc(void *arg)
  * page system.
  */
 static int create_safe_exec_page(void *src_start, size_t length,
-unsigned long dst_addr,
 phys_addr_t *phys_dst_addr)
 {
struct trans_pgd_info trans_info = {
@@ -203,7 +202,8 @@ static int create_safe_exec_page(void *src_start, size_t length,
};
 
void *page = (void *)get_safe_page(GFP_ATOMIC);
-   pgd_t *trans_pgd;
+   phys_addr_t trans_ttbr0;
+   unsigned long t0sz;
int rc;
 
if (!page)
@@ -211,13 +211,7 @@ static int create_safe_exec_page(void *src_start, size_t length,
 
memcpy(page, src_start, length);
__flush_icache_range((unsigned long)page, (unsigned long)page + length);
-
-   trans_pgd = (void *)get_safe_page(GFP_ATOMIC);
-   if (!trans_pgd)
-   return -ENOMEM;
-
-   rc = trans_pgd_map_page(&trans_info, trans_pgd, page, dst_addr,
-   PAGE_KERNEL_EXEC);
+   rc = trans_pgd_idmap_page(&trans_info, &trans_ttbr0, &t0sz, page);
if (rc)
return rc;
 
@@ -230,12 +224,15 @@ static int create_safe_exec_page(void *src_start, size_t length,
 * page, but TLBs may contain stale ASID-tagged entries (e.g. for EFI
 * runtime services), while for a userspace-driven test_resume cycle it
 * points to userspace page tables (and we must point it at a zero page
-* ourselves). Elsewhere we only (un)install the idmap with preemption
-* disabled, so T0SZ should be as required regardless.
+* ourselves).
+*
+* We change T0SZ as part of installing the idmap. This is undone by
+* cpu_uninstall_idmap() in __cpu_suspend_exit().
 */
cpu_set_reserved_ttbr0();
local_flush_tlb_all();
-   write_sysreg(phys_to_ttbr(virt_to_phys(trans_pgd)), ttbr0_el1);
+   __cpu_set_tcr_t0sz(t0sz);
+   write_sysreg(trans_ttbr0, ttbr0_el1);
isb();
 
*phys_dst_addr = virt_to_phys(page);
@@ -434,7 +431,6 @@ int swsusp_arch_resume(void)
void *zero_page;
size_t exit_size;
pgd_t *tmp_pg_dir;
-   phys_addr_t phys_hibernate_exit;
void __noreturn (*hibernate_exit)(phys_addr_t, phys_addr_t, void *,
  void *, phys_addr_t, phys_addr_t);
struct trans_pgd_info trans_info = {
@@ -462,19 +458,13 @@ int swsusp_arch_resume(void)
return -ENOMEM;
}
 
-   /*
-* Locate the exit code in the bottom-but-one page, so that *NULL
-* still has disastrous affects.
-*/
-   hibernate_exit = (void *)PAGE_SIZE;
exit_size = __hibernate_exit_text_end - __hibernate_exit_text_start;
/

[PATCH v10 05/18] arm64: trans_pgd: pass allocator trans_pgd_create_copy

2021-01-25 Thread Pavel Tatashin
Make trans_pgd_create_copy and its subroutines use the allocator that
is passed as an argument, as sketched below.
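
A minimal sketch of the resulting calling convention (condensed from
the hibernate.c hunk below; illustrative only):

        struct trans_pgd_info trans_info = {
                .trans_alloc_page = hibernate_page_alloc,
                .trans_alloc_arg  = (void *)GFP_ATOMIC,
        };

        /* every page-table page is now drawn from the supplied allocator */
        rc = trans_pgd_create_copy(&trans_info, &tmp_pg_dir, PAGE_OFFSET,
                                   PAGE_END);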

Signed-off-by: Pavel Tatashin 
Reviewed-by: James Morse 
---
 arch/arm64/include/asm/trans_pgd.h |  4 +--
 arch/arm64/kernel/hibernate.c  |  7 -
 arch/arm64/mm/trans_pgd.c  | 49 ++
 3 files changed, 38 insertions(+), 22 deletions(-)

diff --git a/arch/arm64/include/asm/trans_pgd.h b/arch/arm64/include/asm/trans_pgd.h
index b46409b25234..7fbf6a3ccff7 100644
--- a/arch/arm64/include/asm/trans_pgd.h
+++ b/arch/arm64/include/asm/trans_pgd.h
@@ -27,8 +27,8 @@ struct trans_pgd_info {
void *trans_alloc_arg;
 };
 
-int trans_pgd_create_copy(pgd_t **dst_pgdp, unsigned long start,
- unsigned long end);
+int trans_pgd_create_copy(struct trans_pgd_info *info, pgd_t **trans_pgd,
+ unsigned long start, unsigned long end);
 
 int trans_pgd_map_page(struct trans_pgd_info *info, pgd_t *trans_pgd,
   void *page, unsigned long dst_addr, pgprot_t pgprot);
diff --git a/arch/arm64/kernel/hibernate.c b/arch/arm64/kernel/hibernate.c
index c173f280bfea..94fc275cdd21 100644
--- a/arch/arm64/kernel/hibernate.c
+++ b/arch/arm64/kernel/hibernate.c
@@ -437,13 +437,18 @@ int swsusp_arch_resume(void)
phys_addr_t phys_hibernate_exit;
void __noreturn (*hibernate_exit)(phys_addr_t, phys_addr_t, void *,
  void *, phys_addr_t, phys_addr_t);
+   struct trans_pgd_info trans_info = {
+   .trans_alloc_page   = hibernate_page_alloc,
+   .trans_alloc_arg= (void *)GFP_ATOMIC,
+   };
 
/*
 * Restoring the memory image will overwrite the ttbr1 page tables.
 * Create a second copy of just the linear map, and use this when
 * restoring.
 */
-   rc = trans_pgd_create_copy(&tmp_pg_dir, PAGE_OFFSET, PAGE_END);
+   rc = trans_pgd_create_copy(&trans_info, &tmp_pg_dir, PAGE_OFFSET,
+  PAGE_END);
if (rc)
return rc;
 
diff --git a/arch/arm64/mm/trans_pgd.c b/arch/arm64/mm/trans_pgd.c
index f28eceba2242..47b6b7029907 100644
--- a/arch/arm64/mm/trans_pgd.c
+++ b/arch/arm64/mm/trans_pgd.c
@@ -57,14 +57,14 @@ static void _copy_pte(pte_t *dst_ptep, pte_t *src_ptep, unsigned long addr)
}
 }
 
-static int copy_pte(pmd_t *dst_pmdp, pmd_t *src_pmdp, unsigned long start,
-   unsigned long end)
+static int copy_pte(struct trans_pgd_info *info, pmd_t *dst_pmdp,
+   pmd_t *src_pmdp, unsigned long start, unsigned long end)
 {
pte_t *src_ptep;
pte_t *dst_ptep;
unsigned long addr = start;
 
-   dst_ptep = (pte_t *)get_safe_page(GFP_ATOMIC);
+   dst_ptep = trans_alloc(info);
if (!dst_ptep)
return -ENOMEM;
pmd_populate_kernel(&init_mm, dst_pmdp, dst_ptep);
@@ -78,8 +78,8 @@ static int copy_pte(pmd_t *dst_pmdp, pmd_t *src_pmdp, unsigned long start,
return 0;
 }
 
-static int copy_pmd(pud_t *dst_pudp, pud_t *src_pudp, unsigned long start,
-   unsigned long end)
+static int copy_pmd(struct trans_pgd_info *info, pud_t *dst_pudp,
+   pud_t *src_pudp, unsigned long start, unsigned long end)
 {
pmd_t *src_pmdp;
pmd_t *dst_pmdp;
@@ -87,7 +87,7 @@ static int copy_pmd(pud_t *dst_pudp, pud_t *src_pudp, unsigned long start,
unsigned long addr = start;
 
if (pud_none(READ_ONCE(*dst_pudp))) {
-   dst_pmdp = (pmd_t *)get_safe_page(GFP_ATOMIC);
+   dst_pmdp = trans_alloc(info);
if (!dst_pmdp)
return -ENOMEM;
pud_populate(&init_mm, dst_pudp, dst_pmdp);
@@ -102,7 +102,7 @@ static int copy_pmd(pud_t *dst_pudp, pud_t *src_pudp, unsigned long start,
if (pmd_none(pmd))
continue;
if (pmd_table(pmd)) {
-   if (copy_pte(dst_pmdp, src_pmdp, addr, next))
+   if (copy_pte(info, dst_pmdp, src_pmdp, addr, next))
return -ENOMEM;
} else {
set_pmd(dst_pmdp,
@@ -113,7 +113,8 @@ static int copy_pmd(pud_t *dst_pudp, pud_t *src_pudp, unsigned long start,
return 0;
 }
 
-static int copy_pud(p4d_t *dst_p4dp, p4d_t *src_p4dp, unsigned long start,
+static int copy_pud(struct trans_pgd_info *info, p4d_t *dst_p4dp,
+   p4d_t *src_p4dp, unsigned long start,
unsigned long end)
 {
pud_t *dst_pudp;
@@ -122,7 +123,7 @@ static int copy_pud(p4d_t *dst_p4dp, p4d_t *src_p4dp, unsigned long start,
unsigned long addr = start;
 
if (p4d_none(READ_ONCE(*dst_p4dp))) {
-   dst_pudp = (pud_t *)get_safe_page(GFP_ATOMIC);
+   dst_pudp = trans_alloc(info);
if (!dst_pudp)
return -ENOMEM;

[PATCH v10 01/18] arm64: kexec: make dtb_mem always enabled

2021-01-25 Thread Pavel Tatashin
Currently, dtb_mem is enabled only when CONFIG_KEXEC_FILE is
enabled. This adds ugly ifdefs to C files.

Always enable dtb_mem; when it is not used, it is NULL.
Change dtb_mem to phys_addr_t, as it is a physical address.

Signed-off-by: Pavel Tatashin 
Reviewed-by: James Morse 
---
 arch/arm64/include/asm/kexec.h    | 4 ++--
 arch/arm64/kernel/machine_kexec.c | 6 +-
 2 files changed, 3 insertions(+), 7 deletions(-)

diff --git a/arch/arm64/include/asm/kexec.h b/arch/arm64/include/asm/kexec.h
index d24b527e8c00..61530ec3a9b1 100644
--- a/arch/arm64/include/asm/kexec.h
+++ b/arch/arm64/include/asm/kexec.h
@@ -90,18 +90,18 @@ static inline void crash_prepare_suspend(void) {}
 static inline void crash_post_resume(void) {}
 #endif
 
-#ifdef CONFIG_KEXEC_FILE
 #define ARCH_HAS_KIMAGE_ARCH
 
 struct kimage_arch {
void *dtb;
-   unsigned long dtb_mem;
+   phys_addr_t dtb_mem;
/* Core ELF header buffer */
void *elf_headers;
unsigned long elf_headers_mem;
unsigned long elf_headers_sz;
 };
 
+#ifdef CONFIG_KEXEC_FILE
 extern const struct kexec_file_ops kexec_image_ops;
 
 struct kimage;
diff --git a/arch/arm64/kernel/machine_kexec.c b/arch/arm64/kernel/machine_kexec.c
index a0b144cfaea7..8096a6aa1d49 100644
--- a/arch/arm64/kernel/machine_kexec.c
+++ b/arch/arm64/kernel/machine_kexec.c
@@ -204,11 +204,7 @@ void machine_kexec(struct kimage *kimage)
 * In kexec_file case, the kernel starts directly without purgatory.
 */
cpu_soft_restart(reboot_code_buffer_phys, kimage->head, kimage->start,
-#ifdef CONFIG_KEXEC_FILE
-   kimage->arch.dtb_mem);
-#else
-   0);
-#endif
+kimage->arch.dtb_mem);
 
BUG(); /* Should never get here. */
 }
-- 
2.25.1




[PATCH v10 06/18] arm64: trans_pgd: pass NULL instead of init_mm to *_populate functions

2021-01-25 Thread Pavel Tatashin
trans_pgd_* should be independent of mm context because the tables that
are created by this code are used when there is no mm context around,
as it is between kernels. Simply replace the init_mm references with
NULL.

Signed-off-by: Pavel Tatashin 
Acked-by: James Morse 
---
 arch/arm64/mm/trans_pgd.c | 14 +++---
 1 file changed, 7 insertions(+), 7 deletions(-)

diff --git a/arch/arm64/mm/trans_pgd.c b/arch/arm64/mm/trans_pgd.c
index 47b6b7029907..ded8e2ba0308 100644
--- a/arch/arm64/mm/trans_pgd.c
+++ b/arch/arm64/mm/trans_pgd.c
@@ -67,7 +67,7 @@ static int copy_pte(struct trans_pgd_info *info, pmd_t *dst_pmdp,
dst_ptep = trans_alloc(info);
if (!dst_ptep)
return -ENOMEM;
-   pmd_populate_kernel(&init_mm, dst_pmdp, dst_ptep);
+   pmd_populate_kernel(NULL, dst_pmdp, dst_ptep);
dst_ptep = pte_offset_kernel(dst_pmdp, start);
 
src_ptep = pte_offset_kernel(src_pmdp, start);
@@ -90,7 +90,7 @@ static int copy_pmd(struct trans_pgd_info *info, pud_t *dst_pudp,
dst_pmdp = trans_alloc(info);
if (!dst_pmdp)
return -ENOMEM;
-   pud_populate(&init_mm, dst_pudp, dst_pmdp);
+   pud_populate(NULL, dst_pudp, dst_pmdp);
}
dst_pmdp = pmd_offset(dst_pudp, start);
 
@@ -126,7 +126,7 @@ static int copy_pud(struct trans_pgd_info *info, p4d_t *dst_p4dp,
dst_pudp = trans_alloc(info);
if (!dst_pudp)
return -ENOMEM;
-   p4d_populate(&init_mm, dst_p4dp, dst_pudp);
+   p4d_populate(NULL, dst_p4dp, dst_pudp);
}
dst_pudp = pud_offset(dst_p4dp, start);
 
@@ -241,7 +241,7 @@ int trans_pgd_map_page(struct trans_pgd_info *info, pgd_t *trans_pgd,
p4dp = trans_alloc(info);
if (!pgdp)
return -ENOMEM;
-   pgd_populate(&init_mm, pgdp, p4dp);
+   pgd_populate(NULL, pgdp, p4dp);
}
 
p4dp = p4d_offset(pgdp, dst_addr);
@@ -249,7 +249,7 @@ int trans_pgd_map_page(struct trans_pgd_info *info, pgd_t *trans_pgd,
pudp = trans_alloc(info);
if (!pudp)
return -ENOMEM;
-   p4d_populate(&init_mm, p4dp, pudp);
+   p4d_populate(NULL, p4dp, pudp);
}
 
pudp = pud_offset(p4dp, dst_addr);
@@ -257,7 +257,7 @@ int trans_pgd_map_page(struct trans_pgd_info *info, pgd_t *trans_pgd,
pmdp = trans_alloc(info);
if (!pmdp)
return -ENOMEM;
-   pud_populate(&init_mm, pudp, pmdp);
+   pud_populate(NULL, pudp, pmdp);
}
 
pmdp = pmd_offset(pudp, dst_addr);
@@ -265,7 +265,7 @@ int trans_pgd_map_page(struct trans_pgd_info *info, pgd_t *trans_pgd,
ptep = trans_alloc(info);
if (!ptep)
return -ENOMEM;
-   pmd_populate_kernel(&init_mm, pmdp, ptep);
+   pmd_populate_kernel(NULL, pmdp, ptep);
}
 
ptep = pte_offset_kernel(pmdp, dst_addr);
-- 
2.25.1




[PATCH v10 09/18] arm64: kexec: move relocation function setup

2021-01-25 Thread Pavel Tatashin
Currently, the kernel relocation function is configured in
machine_kexec() at the time of kexec reboot, using control_code_page.

This operation, however, is more logically done during kexec_load, so
remove it from reboot time and move the setup of this function to the
newly added machine_kexec_post_load().

Because once the MMU is enabled the kexec control page will contain
not just the relocation kernel but also the vector table, add a
pointer to the actual function within this page: arch.kern_reloc.
Currently it equals the beginning of the page; we will add offsets
later, when the vector table is added.

Signed-off-by: Pavel Tatashin 
Reviewed-by: James Morse 
---
 arch/arm64/include/asm/kexec.h    |  1 +
 arch/arm64/kernel/machine_kexec.c | 46 +--
 2 files changed, 20 insertions(+), 27 deletions(-)

diff --git a/arch/arm64/include/asm/kexec.h b/arch/arm64/include/asm/kexec.h
index 61530ec3a9b1..9befcd87e9a8 100644
--- a/arch/arm64/include/asm/kexec.h
+++ b/arch/arm64/include/asm/kexec.h
@@ -95,6 +95,7 @@ static inline void crash_post_resume(void) {}
 struct kimage_arch {
void *dtb;
phys_addr_t dtb_mem;
+   phys_addr_t kern_reloc;
/* Core ELF header buffer */
void *elf_headers;
unsigned long elf_headers_mem;
diff --git a/arch/arm64/kernel/machine_kexec.c b/arch/arm64/kernel/machine_kexec.c
index 8096a6aa1d49..a8aaa6562429 100644
--- a/arch/arm64/kernel/machine_kexec.c
+++ b/arch/arm64/kernel/machine_kexec.c
@@ -42,6 +42,7 @@ static void _kexec_image_info(const char *func, int line,
pr_debug("start:   %lx\n", kimage->start);
pr_debug("head:%lx\n", kimage->head);
pr_debug("nr_segments: %lu\n", kimage->nr_segments);
+   pr_debug("kern_reloc: %pa\n", &kimage->arch.kern_reloc);
 
for (i = 0; i < kimage->nr_segments; i++) {
pr_debug("  segment[%lu]: %016lx - %016lx, 0x%lx bytes, %lu 
pages\n",
@@ -58,6 +59,22 @@ void machine_kexec_cleanup(struct kimage *kimage)
/* Empty routine needed to avoid build errors. */
 }
 
+int machine_kexec_post_load(struct kimage *kimage)
+{
+   void *reloc_code = page_to_virt(kimage->control_code_page);
+
+   memcpy(reloc_code, arm64_relocate_new_kernel,
+  arm64_relocate_new_kernel_size);
+   kimage->arch.kern_reloc = __pa(reloc_code);
+
+   /* Flush the reloc_code in preparation for its execution. */
+   __flush_dcache_area(reloc_code, arm64_relocate_new_kernel_size);
+   flush_icache_range((uintptr_t)reloc_code, (uintptr_t)reloc_code +
+  arm64_relocate_new_kernel_size);
+
+   return 0;
+}
+
 /**
  * machine_kexec_prepare - Prepare for a kexec reboot.
  *
@@ -143,8 +160,6 @@ static void kexec_segment_flush(const struct kimage *kimage)
  */
 void machine_kexec(struct kimage *kimage)
 {
-   phys_addr_t reboot_code_buffer_phys;
-   void *reboot_code_buffer;
bool in_kexec_crash = (kimage == kexec_crash_image);
bool stuck_cpus = cpus_are_stuck_in_kernel();
 
@@ -155,31 +170,8 @@ void machine_kexec(struct kimage *kimage)
WARN(in_kexec_crash && (stuck_cpus || smp_crash_stop_failed()),
"Some CPUs may be stale, kdump will be unreliable.\n");
 
-   reboot_code_buffer_phys = page_to_phys(kimage->control_code_page);
-   reboot_code_buffer = phys_to_virt(reboot_code_buffer_phys);
-
kexec_image_info(kimage);
 
-   /*
-* Copy arm64_relocate_new_kernel to the reboot_code_buffer for use
-* after the kernel is shut down.
-*/
-   memcpy(reboot_code_buffer, arm64_relocate_new_kernel,
-   arm64_relocate_new_kernel_size);
-
-   /* Flush the reboot_code_buffer in preparation for its execution. */
-   __flush_dcache_area(reboot_code_buffer, arm64_relocate_new_kernel_size);
-
-   /*
-* Although we've killed off the secondary CPUs, we don't update
-* the online mask if we're handling a crash kernel and consequently
-* need to avoid flush_icache_range(), which will attempt to IPI
-* the offline CPUs. Therefore, we must use the __* variant here.
-*/
-   __flush_icache_range((uintptr_t)reboot_code_buffer,
-(uintptr_t)reboot_code_buffer +
-arm64_relocate_new_kernel_size);
-
/* Flush the kimage list and its buffers. */
kexec_list_flush(kimage);
 
@@ -193,7 +185,7 @@ void machine_kexec(struct kimage *kimage)
 
/*
 * cpu_soft_restart will shutdown the MMU, disable data caches, then
-* transfer control to the reboot_code_buffer which contains a copy of
+* transfer control to the kern_reloc which contains a copy of
 * the arm64_relocate_new_kernel routine.  arm64_relocate_new_kernel
 * uses physical addressing to relocate the new image to its final
 * position and transfers control to the image entry point when the
@@ -203,7 

[PATCH v10 07/18] arm64: mm: Always update TCR_EL1 from __cpu_set_tcr_t0sz()

2021-01-25 Thread Pavel Tatashin
From: James Morse 

Because only the idmap sets a non-standard T0SZ, __cpu_set_tcr_t0sz()
can check for platforms that need to do this using
__cpu_uses_extended_idmap() before doing its work.

The idmap is only built with enough levels (and T0SZ bits) to map
its single page.

To allow hibernate, and later kexec, to idmap their single-page copy
routines, __cpu_set_tcr_t0sz() needs to consider additional users, who
may need a different number of levels/T0SZ bits from the idmap
(i.e. VA_BITS may be enough for the idmap, but not for hibernate/kexec).

Always read TCR_EL1, and check whether any work needs doing for
this request. __cpu_uses_extended_idmap() remains as it is used
by KVM, whose idmap is also part of the kernel image.

This mostly affects the cpuidle path, where we now get an extra
system register read.
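
For illustration, a caller with a wider reach than VA_BITS can now simply
pass its own T0SZ value and rely on the new early-return when that value
is already programmed. The helper name and the T0SZ computation in this
sketch are illustrative assumptions, not part of the patch:

#include <asm/mmu_context.h>

/*
 * Hypothetical caller sketch: T0SZ encodes an input range of
 * 2^(64 - T0SZ) bytes, so a copy routine that needs va_bits of
 * reach would install 64 - va_bits.
 */
static void install_copy_routine_t0sz(unsigned int va_bits)
{
	unsigned long t0sz = 64UL - va_bits;

	/* No-op when TCR_EL1.T0SZ already holds this value */
	__cpu_set_tcr_t0sz(t0sz);
}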

CC: Lorenzo Pieralisi 
CC: Sudeep Holla 
Signed-off-by: James Morse 
Signed-off-by: Pavel Tatashin 
---
 arch/arm64/include/asm/mmu_context.h | 7 +++
 1 file changed, 3 insertions(+), 4 deletions(-)

diff --git a/arch/arm64/include/asm/mmu_context.h 
b/arch/arm64/include/asm/mmu_context.h
index 0b3079fd28eb..70ce8c1d2b07 100644
--- a/arch/arm64/include/asm/mmu_context.h
+++ b/arch/arm64/include/asm/mmu_context.h
@@ -81,16 +81,15 @@ static inline bool __cpu_uses_extended_idmap_level(void)
 }
 
 /*
- * Set TCR.T0SZ to its default value (based on VA_BITS)
+ * Ensure TCR.T0SZ is set to the provided value.
  */
 static inline void __cpu_set_tcr_t0sz(unsigned long t0sz)
 {
-   unsigned long tcr;
+   unsigned long tcr = read_sysreg(tcr_el1);
 
-   if (!__cpu_uses_extended_idmap())
+   if ((tcr & TCR_T0SZ_MASK) >> TCR_T0SZ_OFFSET == t0sz)
return;
 
-   tcr = read_sysreg(tcr_el1);
tcr &= ~TCR_T0SZ_MASK;
tcr |= t0sz << TCR_T0SZ_OFFSET;
write_sysreg(tcr, tcr_el1);
-- 
2.25.1




[PATCH v10 03/18] arm64: hibernate: move page handling function to new trans_pgd.c

2021-01-25 Thread Pavel Tatashin
Now that we have abstracted the required functions, move them to a new
home. Later, we will generalize these functions so they are useful
outside of hibernation.
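
For orientation, the two moved entry points are used roughly as in the
following sketch. This is illustrative only: the helper name and the
PAGE_OFFSET/PAGE_END bounds mirror the hibernate call sites rather than
quoting them.

#include <linux/pgtable.h>
#include <asm/trans_pgd.h>

/*
 * Illustrative sketch, not part of this patch: build a transitional
 * copy of the linear map, then map one extra page into it.
 */
static int build_transitional_table(void *page, unsigned long dst_addr)
{
	pgd_t *trans_pgd;
	int rc;

	/* Copy the kernel mappings for the whole linear range. */
	rc = trans_pgd_create_copy(&trans_pgd, PAGE_OFFSET, PAGE_END);
	if (rc)
		return rc;

	/* Map the given page at dst_addr with executable permission. */
	return trans_pgd_map_page(trans_pgd, page, dst_addr, PAGE_KERNEL_EXEC);
}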

Signed-off-by: Pavel Tatashin 
Reviewed-by: James Morse 
---
 arch/arm64/Kconfig |   4 +
 arch/arm64/include/asm/trans_pgd.h |  21 +++
 arch/arm64/kernel/hibernate.c  | 228 +-
 arch/arm64/mm/Makefile |   1 +
 arch/arm64/mm/trans_pgd.c  | 250 +
 5 files changed, 277 insertions(+), 227 deletions(-)
 create mode 100644 arch/arm64/include/asm/trans_pgd.h
 create mode 100644 arch/arm64/mm/trans_pgd.c

diff --git a/arch/arm64/Kconfig b/arch/arm64/Kconfig
index f39568b28ec1..fc0ed9d6e011 100644
--- a/arch/arm64/Kconfig
+++ b/arch/arm64/Kconfig
@@ -1132,6 +1132,10 @@ config CRASH_DUMP
 
  For more details see Documentation/admin-guide/kdump/kdump.rst
 
+config TRANS_TABLE
+   def_bool y
+   depends on HIBERNATION
+
 config XEN_DOM0
def_bool y
depends on XEN
diff --git a/arch/arm64/include/asm/trans_pgd.h b/arch/arm64/include/asm/trans_pgd.h
new file mode 100644
index ..23153c13d1ce
--- /dev/null
+++ b/arch/arm64/include/asm/trans_pgd.h
@@ -0,0 +1,21 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+
+/*
+ * Copyright (c) 2020, Microsoft Corporation.
+ * Pavel Tatashin 
+ */
+
+#ifndef _ASM_TRANS_TABLE_H
+#define _ASM_TRANS_TABLE_H
+
+#include 
+#include 
+#include 
+
+int trans_pgd_create_copy(pgd_t **dst_pgdp, unsigned long start,
+ unsigned long end);
+
+int trans_pgd_map_page(pgd_t *trans_pgd, void *page, unsigned long dst_addr,
+  pgprot_t pgprot);
+
+#endif /* _ASM_TRANS_TABLE_H */
diff --git a/arch/arm64/kernel/hibernate.c b/arch/arm64/kernel/hibernate.c
index 0a54d81c90f9..4a38662f0d90 100644
--- a/arch/arm64/kernel/hibernate.c
+++ b/arch/arm64/kernel/hibernate.c
@@ -16,7 +16,6 @@
 #define pr_fmt(x) "hibernate: " x
 #include 
 #include 
-#include 
 #include 
 #include 
 #include 
@@ -31,13 +30,12 @@
 #include 
 #include 
 #include 
-#include 
-#include 
 #include 
 #include 
 #include 
 #include 
 #include 
+#include 
 #include 
 
 /*
@@ -178,54 +176,6 @@ int arch_hibernation_header_restore(void *addr)
 }
 EXPORT_SYMBOL(arch_hibernation_header_restore);
 
-static int trans_pgd_map_page(pgd_t *trans_pgd, void *page,
-  unsigned long dst_addr,
-  pgprot_t pgprot)
-{
-   pgd_t *pgdp;
-   p4d_t *p4dp;
-   pud_t *pudp;
-   pmd_t *pmdp;
-   pte_t *ptep;
-
-   pgdp = pgd_offset_pgd(trans_pgd, dst_addr);
-   if (pgd_none(READ_ONCE(*pgdp))) {
-   p4dp = (void *)get_safe_page(GFP_ATOMIC);
-   if (!p4dp)
-   return -ENOMEM;
-   pgd_populate(&init_mm, pgdp, p4dp);
-   }
-
-   p4dp = p4d_offset(pgdp, dst_addr);
-   if (p4d_none(READ_ONCE(*p4dp))) {
-   pudp = (void *)get_safe_page(GFP_ATOMIC);
-   if (!pudp)
-   return -ENOMEM;
-   p4d_populate(&init_mm, p4dp, pudp);
-   }
-
-   pudp = pud_offset(p4dp, dst_addr);
-   if (pud_none(READ_ONCE(*pudp))) {
-   pmdp = (void *)get_safe_page(GFP_ATOMIC);
-   if (!pmdp)
-   return -ENOMEM;
-   pud_populate(&init_mm, pudp, pmdp);
-   }
-
-   pmdp = pmd_offset(pudp, dst_addr);
-   if (pmd_none(READ_ONCE(*pmdp))) {
-   ptep = (void *)get_safe_page(GFP_ATOMIC);
-   if (!ptep)
-   return -ENOMEM;
-   pmd_populate_kernel(&init_mm, pmdp, ptep);
-   }
-
-   ptep = pte_offset_kernel(pmdp, dst_addr);
-   set_pte(ptep, pfn_pte(virt_to_pfn(page), PAGE_KERNEL_EXEC));
-
-   return 0;
-}
-
 /*
 * Copies length bytes, starting at src_start, into a new page,
 * performs cache maintenance, then maps it at the specified address low
@@ -462,182 +412,6 @@ int swsusp_arch_suspend(void)
return ret;
 }
 
-static void _copy_pte(pte_t *dst_ptep, pte_t *src_ptep, unsigned long addr)
-{
-   pte_t pte = READ_ONCE(*src_ptep);
-
-   if (pte_valid(pte)) {
-   /*
-* Resume will overwrite areas that may be marked
-* read only (code, rodata). Clear the RDONLY bit from
-* the temporary mappings we use during restore.
-*/
-   set_pte(dst_ptep, pte_mkwrite(pte));
-   } else if (debug_pagealloc_enabled() && !pte_none(pte)) {
-   /*
-* debug_pagealloc will have removed the PTE_VALID bit if
-* the page isn't in use by the resume kernel. It may have
-* been in use by the original kernel, in which case we need
-* to put it back in our copy to do the restore.
-*
-* Before marking this entry valid, check whether the pfn
-* should be mapped.
-

[PATCH v10 00/18] arm64: MMU enabled kexec relocation

2021-01-25 Thread Pavel Tatashin
Changelog:
v10:
- Addressed a lot of comments from James Morse and from Marc Zyngier
- Added review-by's
- Synchronized with mainline

v9: - 9 patches from previous series landed in upstream, so now the series
  is smaller
- Added two patches from James Morse to address idmap issues for
  machines with high physical addresses.
- Addressed comments from Selin Dag about compiling issues. He also
  tested my series and got similar performance results: ~60 ms instead
  of ~580 ms with an initramfs size of ~120MB.
v8:
- Synced with mainline to keep series up-to-date
v7:
- Addressed comments from James Morse
- arm64: hibernate: pass the allocated pgdp to ttbr0
  Removed "Fixes" tag, and added Reviewed-by: James Morse
- arm64: hibernate: check pgd table allocation
  Sent out as a standalone patch so it can be sent to stable
  Series applies on mainline + this patch
- arm64: hibernate: add trans_pgd public functions
  Remove second allocation of tmp_pg_dir in swsusp_arch_resume
  Added Reviewed-by: James Morse 
- arm64: kexec: move relocation function setup and clean up
  Fixed typo in commit log
  Changed kern_reloc to phys_addr_t types.
  Added explanation why kern_reloc is needed.
  Split into four patches:
  arm64: kexec: make dtb_mem always enabled
  arm64: kexec: remove unnecessary debug prints
  arm64: kexec: call kexec_image_info only once
  arm64: kexec: move relocation function setup
- arm64: kexec: add expandable argument to relocation function
  Changed types of new arguments from unsigned long to phys_addr_t.
  Changed offset prefix to KEXEC_*
  Split into four patches:
  arm64: kexec: cpu_soft_restart change argument types
  arm64: kexec: arm64_relocate_new_kernel clean-ups
  arm64: kexec: arm64_relocate_new_kernel don't use x0 as temp
  arm64: kexec: add expandable argument to relocation function
- arm64: kexec: configure trans_pgd page table for kexec
  Added invalid entries into EL2 vector table
  Removed KEXEC_EL2_VECTOR_TABLE_SIZE and KEXEC_EL2_VECTOR_TABLE_OFFSET
  Copy relocation functions and table into separate pages
  Changed types in kern_reloc_arg.
  Split into three patches:
  arm64: kexec: offset for relocation function
  arm64: kexec: kexec EL2 vectors
  arm64: kexec: configure trans_pgd page table for kexec
- arm64: kexec: enable MMU during kexec relocation
  Split into two patches:
  arm64: kexec: enable MMU during kexec relocation
  arm64: kexec: remove head from relocation argument
v6:
- Sync with mainline tip
- Added Acked's from Dave Young
v5:
- Addressed comments from Matthias Brugger: added review-by's, improved
  comments, and made cleanups to swsusp_arch_resume() in addition to
  create_safe_exec_page().
- Synced with mainline tip.
v4:
- Addressed comments from James Morse.
- Split "check pgd table allocation" into two patches, and moved to
  the beginning of series  for simpler backport of the fixes.
  Added "Fixes:" tags to commit logs.
- Changed "arm64, hibernate:" to "arm64: hibernate:"
- Added Reviewed-by's
- Moved "add PUD_SECT_RDONLY" earlier in series to be with other
  clean-ups
- Added "Derived from:" to arch/arm64/mm/trans_pgd.c
- Removed "flags" from trans_info
- Changed .trans_alloc_page assumption to return zeroed page.
- Simplify changes to trans_pgd_map_page(), by keeping the old
  code.
- Simplify changes to trans_pgd_create_copy, by keeping the old
  code.
- Removed: "add trans_pgd_create_empty"
- replace init_mm with NULL, and keep using non "__" version of
  populate functions.
v3:
- Split changes to create_safe_exec_page() into several patches for
  easier review as request by Mark Rutland. This is why this series
  has 3 more patches.
- Renamed trans_table to tans_pgd as agreed with Mark. The header
  comment in trans_pgd.c explains that trans stands for
  transitional page tables. Meaning they are used in transition
  between two kernels.
v2:
- Fixed hibernate bug reported by James Morse
- Addressed comments from James Morse:
  * More incremental changes to trans_table
  * Removed TRANS_FORCEMAP
  * Added kexec reboot data for image with 380M in size.

Enable MMU during kexec relocation in order to improve reboot performance.

If kexec functionality is used for a fast system update with minimal
downtime, the relocation of kernel + initramfs takes a significant portion
of r

[PATCH v10 04/18] arm64: trans_pgd: make trans_pgd_map_page generic

2021-01-25 Thread Pavel Tatashin
kexec is going to use a different allocator, so make trans_pgd_map_page
accept the allocator as an argument. kexec is also going to use a
different mapping protection, so pass that via an argument as well.
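
As a hedged sketch of where this is heading (the kexec-side names below
are assumptions, not part of this patch), a kexec allocator could hand
out zeroed kexec-safe control pages and be wired up through the new
argument:

#include <linux/kexec.h>
#include <linux/mm.h>
#include <linux/string.h>
#include <asm/trans_pgd.h>

/* Illustrative: return exactly one zeroed page of kexec-safe memory. */
static void *kexec_page_alloc(void *arg)
{
	struct kimage *kimage = arg;
	struct page *page = kimage_alloc_control_pages(kimage, 0);

	if (!page)
		return NULL;

	memset(page_address(page), 0, PAGE_SIZE);
	return page_address(page);
}

/* Illustrative: map the relocation code through the new interface. */
static int kexec_map_reloc_page(struct kimage *kimage, pgd_t *trans_pgd,
				void *reloc_code, unsigned long dst_addr)
{
	struct trans_pgd_info info = {
		.trans_alloc_page	= kexec_page_alloc,
		.trans_alloc_arg	= kimage,
	};

	return trans_pgd_map_page(&info, trans_pgd, reloc_code, dst_addr,
				  PAGE_KERNEL_EXEC);
}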

Signed-off-by: Pavel Tatashin 
Reviewed-by: Matthias Brugger 
---
 arch/arm64/include/asm/trans_pgd.h | 19 +--
 arch/arm64/kernel/hibernate.c  | 12 +++-
 arch/arm64/mm/trans_pgd.c  | 30 ++
 3 files changed, 50 insertions(+), 11 deletions(-)

diff --git a/arch/arm64/include/asm/trans_pgd.h b/arch/arm64/include/asm/trans_pgd.h
index 23153c13d1ce..b46409b25234 100644
--- a/arch/arm64/include/asm/trans_pgd.h
+++ b/arch/arm64/include/asm/trans_pgd.h
@@ -12,10 +12,25 @@
 #include 
 #include 
 
+/*
+ * trans_alloc_page
+ * - Allocator that should return exactly one zeroed page. If this
+ *   allocator fails, trans_pgd_create_copy() and trans_pgd_map_page()
+ *   return -ENOMEM.
+ *
+ * trans_alloc_arg
+ * - Passed to trans_alloc_page as an argument
+ */
+
+struct trans_pgd_info {
+   void * (*trans_alloc_page)(void *arg);
+   void *trans_alloc_arg;
+};
+
 int trans_pgd_create_copy(pgd_t **dst_pgdp, unsigned long start,
  unsigned long end);
 
-int trans_pgd_map_page(pgd_t *trans_pgd, void *page, unsigned long dst_addr,
-  pgprot_t pgprot);
+int trans_pgd_map_page(struct trans_pgd_info *info, pgd_t *trans_pgd,
+  void *page, unsigned long dst_addr, pgprot_t pgprot);
 
 #endif /* _ASM_TRANS_TABLE_H */
diff --git a/arch/arm64/kernel/hibernate.c b/arch/arm64/kernel/hibernate.c
index 4a38662f0d90..c173f280bfea 100644
--- a/arch/arm64/kernel/hibernate.c
+++ b/arch/arm64/kernel/hibernate.c
@@ -176,6 +176,11 @@ int arch_hibernation_header_restore(void *addr)
 }
 EXPORT_SYMBOL(arch_hibernation_header_restore);
 
+static void *hibernate_page_alloc(void *arg)
+{
+   return (void *)get_safe_page((gfp_t)(unsigned long)arg);
+}
+
 /*
  * Copies length bytes, starting at src_start into an new page,
  * perform cache maintenance, then maps it at the specified address low
@@ -192,6 +197,11 @@ static int create_safe_exec_page(void *src_start, size_t length,
 unsigned long dst_addr,
 phys_addr_t *phys_dst_addr)
 {
+   struct trans_pgd_info trans_info = {
+   .trans_alloc_page   = hibernate_page_alloc,
+   .trans_alloc_arg= (void *)GFP_ATOMIC,
+   };
+
void *page = (void *)get_safe_page(GFP_ATOMIC);
pgd_t *trans_pgd;
int rc;
@@ -206,7 +216,7 @@ static int create_safe_exec_page(void *src_start, size_t length,
if (!trans_pgd)
return -ENOMEM;
 
-   rc = trans_pgd_map_page(trans_pgd, page, dst_addr,
+   rc = trans_pgd_map_page(&trans_info, trans_pgd, page, dst_addr,
PAGE_KERNEL_EXEC);
if (rc)
return rc;
diff --git a/arch/arm64/mm/trans_pgd.c b/arch/arm64/mm/trans_pgd.c
index e048d1f5c912..f28eceba2242 100644
--- a/arch/arm64/mm/trans_pgd.c
+++ b/arch/arm64/mm/trans_pgd.c
@@ -25,6 +25,11 @@
 #include 
 #include 
 
+static void *trans_alloc(struct trans_pgd_info *info)
+{
+   return info->trans_alloc_page(info->trans_alloc_arg);
+}
+
 static void _copy_pte(pte_t *dst_ptep, pte_t *src_ptep, unsigned long addr)
 {
pte_t pte = READ_ONCE(*src_ptep);
@@ -201,9 +206,18 @@ int trans_pgd_create_copy(pgd_t **dst_pgdp, unsigned long start,
return rc;
 }
 
-int trans_pgd_map_page(pgd_t *trans_pgd, void *page,
-  unsigned long dst_addr,
-  pgprot_t pgprot)
+/*
+ * Add a map entry to trans_pgd for a base-size page at PTE level.
+ * info:   contains allocator and its argument
+ * trans_pgd:  page table in which new map is added.
+ * page:   page to be mapped.
+ * dst_addr:   new VA address for the page
+ * pgprot: protection for the page.
+ *
+ * Returns 0 on success, and -ENOMEM on failure.
+ */
+int trans_pgd_map_page(struct trans_pgd_info *info, pgd_t *trans_pgd,
+  void *page, unsigned long dst_addr, pgprot_t pgprot)
 {
pgd_t *pgdp;
p4d_t *p4dp;
@@ -213,7 +227,7 @@ int trans_pgd_map_page(pgd_t *trans_pgd, void *page,
 
pgdp = pgd_offset_pgd(trans_pgd, dst_addr);
if (pgd_none(READ_ONCE(*pgdp))) {
-   p4dp = (void *)get_safe_page(GFP_ATOMIC);
+   p4dp = trans_alloc(info);
if (!p4dp)
return -ENOMEM;
pgd_populate(&init_mm, pgdp, p4dp);
@@ -221,7 +235,7 @@ int trans_pgd_map_page(pgd_t *trans_pgd, void *page,
 
p4dp = p4d_offset(pgdp, dst_addr);
if (p4d_none(READ_ONCE(*p4dp))) {
-   pudp = (void *)get_safe_page(GFP_ATOMIC);
+   pudp = trans_alloc(info);
if (!pudp)
return -ENOMEM;
p4d_popul

[PATCH v10 02/18] arm64: hibernate: variable pudp is used instead of p4dp

2021-01-25 Thread Pavel Tatashin
p4dp should be used when the p4d page is allocated.
This is not a functional issue, but it should be fixed for logical
correctness.

Fixes: e9f6376858b9 ("arm64: add support for folded p4d page tables")
Signed-off-by: Pavel Tatashin 
---
 arch/arm64/kernel/hibernate.c | 6 +++---
 1 file changed, 3 insertions(+), 3 deletions(-)

diff --git a/arch/arm64/kernel/hibernate.c b/arch/arm64/kernel/hibernate.c
index 9c9f47e9f7f4..0a54d81c90f9 100644
--- a/arch/arm64/kernel/hibernate.c
+++ b/arch/arm64/kernel/hibernate.c
@@ -190,10 +190,10 @@ static int trans_pgd_map_page(pgd_t *trans_pgd, void *page,
 
pgdp = pgd_offset_pgd(trans_pgd, dst_addr);
if (pgd_none(READ_ONCE(*pgdp))) {
-   pudp = (void *)get_safe_page(GFP_ATOMIC);
-   if (!pudp)
+   p4dp = (void *)get_safe_page(GFP_ATOMIC);
+   if (!p4dp)
return -ENOMEM;
-   pgd_populate(&init_mm, pgdp, pudp);
+   pgd_populate(&init_mm, pgdp, p4dp);
}
 
p4dp = p4d_offset(pgdp, dst_addr);
-- 
2.25.1




Re: [PATCH v9 15/18] arm64: kexec: kexec EL2 vectors

2021-01-25 Thread Pavel Tatashin
> > +.macro el1_sync_64
> > + br  x4  /* Jump to new world from el2 */
> > + .fill 31, 4, 0  /* Set other 31 instr to zeroes */
> > +.endm
>
> The common idiom to write this is to align the beginning of the
> macro, and not to bother about what follows:
>
> .macro whatever
>  .align 7
>  br  x4
> .endm
>
> Especially given that 0 is an undefined instruction, and I really hate
> to see those in the actual text. On the contrary, .align generates NOPs.

Fixed that.

>
> > +
> > +.macro invalid_vector label
> > +\label:
> > + b \label
> > + .fill 31, 4, 0  /* Set other 31 instr to zeroes */
> > +.endm
> > +
> > +/* el2 vectors - switch el2 here while we restore the memory image. */
> > + .align 11
> > +ENTRY(kexec_el2_vectors)
>
> Please see commit 617a2f392c92 ("arm64: kvm: Annotate assembly using
> modern annoations"), and follow the same pattern.

Fixed that as well.

Thank you,
Pasha



Re: [PATCH] kernel/kexec: remove the lock operation of system_transition_mutex

2021-01-25 Thread Rafael J. Wysocki
On Mon, Jan 25, 2021 at 10:05 AM Pingfan Liu  wrote:
>
> On Fri, Jan 22, 2021 at 3:42 PM Baoquan He  wrote:
> >
> > Function kernel_kexec() is called with the lock system_transition_mutex
> > held in the reboot system call. While inside kernel_kexec(), it will
> > acquire system_transition_mutex again. This will lead to a deadlock.
> >
> > The deadlock should be easy to trigger; it hasn't caused any failure
> > reports only because the feature 'kexec jump' is almost never used, as
> > far as I know. An inquiry can be made about who is using 'kexec jump'
> > and where it's used. Before that, let's simply remove the lock operation
> > inside the CONFIG_KEXEC_JUMP ifdeffery scope.
> >
> > Signed-off-by: Baoquan He 
> > Reported-by: Dan Carpenter 
> > Reviewed-by: Pingfan Liu 
> > ---
> >  kernel/kexec_core.c | 2 --
> >  1 file changed, 2 deletions(-)
> >
> > diff --git a/kernel/kexec_core.c b/kernel/kexec_core.c
> > index 80905e5aa8ae..a0b6780740c8 100644
> > --- a/kernel/kexec_core.c
> > +++ b/kernel/kexec_core.c
> > @@ -1134,7 +1134,6 @@ int kernel_kexec(void)
> >
> >  #ifdef CONFIG_KEXEC_JUMP
> > if (kexec_image->preserve_context) {
> > -   lock_system_sleep();
> > pm_prepare_console();
> > error = freeze_processes();
> > if (error) {
> > @@ -1197,7 +1196,6 @@ int kernel_kexec(void)
> > thaw_processes();
> >   Restore_console:
> > pm_restore_console();
> > -   unlock_system_sleep();
> > }
> >  #endif
> >
> > --
> > 2.17.2
> >
> Reviewed-by: Pingfan Liu 

Applied as 5.11-rc material, thanks!
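
For readers unfamiliar with the failure mode: a default (non-recursive)
mutex acquired twice by the same thread blocks forever. A minimal
userspace analogy, as a sketch only (this is not kernel code):

#include <pthread.h>
#include <stdio.h>

int main(void)
{
	pthread_mutex_t transition;
	pthread_mutexattr_t attr;

	pthread_mutexattr_init(&attr);
	pthread_mutexattr_settype(&attr, PTHREAD_MUTEX_NORMAL);
	pthread_mutex_init(&transition, &attr);

	pthread_mutex_lock(&transition);	/* reboot(2) path takes the lock */
	puts("first acquire ok; second acquire will block forever");
	pthread_mutex_lock(&transition);	/* lock_system_sleep() analog */

	puts("never reached");
	return 0;
}

Built with "cc deadlock.c -pthread", the program hangs on the second
lock, mirroring kernel_kexec() re-acquiring system_transition_mutex via
lock_system_sleep().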



Re: Issue in dmesg time with lockless ring buffer

2021-01-25 Thread John Ogness
On 2021-01-22, "J. Avila"  wrote:
> When doing some internal testing on a 5.10.4 kernel, we found that the
> time taken for dmesg seemed to increase from the order of milliseconds
> to the order of seconds when the dmesg size approached the ~1.2MB
> limit. After doing some digging, we found that by reverting all of the
> patches in printk/ up to and including
> 896fbe20b4e2333fb55cc9b9b783ebcc49eee7c7 ("use the lockless
> ringbuffer"), we were able to once more see normal dmesg times.
>
> This kernel had no meaningful diffs in the printk/ dir when compared
> to Linus' tree. This behavior was consistently reproducible using the
> following steps:
>
> 1) In one shell, run "time dmesg > /dev/null"
> 2) In another, constantly write to /dev/kmsg
>
> Within ~5 minutes, we saw that dmesg times increased to 1 second, only
> increasing further from there. Is this a known issue?

Over the last couple of days I have tried to reproduce this issue, with
no success.

Is your dmesg using /dev/kmsg or syslog() to read the buffer?

Are there any syslog daemons or systemd running? Perhaps you can run
your test within an initrd to see if this effect is still visible?

John Ogness
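
For reference, step 2 of the reproducer can be driven by a small flooder
such as this sketch (illustrative only; assumes root and a writable
/dev/kmsg):

#include <fcntl.h>
#include <stdio.h>

/* Sketch: flood /dev/kmsg; each write becomes one log record. */
int main(void)
{
	int fd = open("/dev/kmsg", O_WRONLY);

	if (fd < 0) {
		perror("open /dev/kmsg");
		return 1;
	}

	for (;;)
		dprintf(fd, "kmsg-flood: test message\n");
}

Run it in one shell while timing "dmesg > /dev/null" in another.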


