[PATCH v8 RESEND 4/4] kdump/vmcore: support encrypted old memory with SME enabled

2018-09-29 Thread Lianbo Jiang
In the kdump kernel, the old memory needs to be dumped into the vmcore
file. If SME was enabled in the first kernel, the old memory has to be
remapped with the memory encryption mask so that it is automatically
decrypted when read from DRAM.
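
For reference, the read path in fs/proc/vmcore.c (quoted in full by the
kbuild robot further down in this thread) ends up dispatching roughly as
follows; this is a sketch of that hunk, not the verbatim patch:

	if (pfn_is_ram(pfn) == 0) {
		/* Not RAM: return zeros for sparse dump files. */
		memset(buf, 0, nr_bytes);
	} else {
		if (encrypted)
			tmp = copy_oldmem_page_encrypted(pfn, buf, nr_bytes,
							 offset, userbuf);
		else
			tmp = copy_oldmem_page(pfn, buf, nr_bytes,
					       offset, userbuf);
		if (tmp < 0)
			return tmp;
	}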

For SME kdump, there are two cases that are not supported:

 --------------------------------------------------------
| first kernel         | second kernel  | kdump support |
| (mem_encrypt=on|off) | (on|off)       | (yes|no)      |
|----------------------+----------------+---------------|
| on                   | on             | yes           |
| off                  | off            | yes           |
| on                   | off            | no            |
| off                  | on             | no            |
|______________________|________________|_______________|

1. SME is enabled in the first kernel, but disabled in the kdump kernel.
In this case the old memory is encrypted and cannot be decrypted: the
encryption key is not visible to any software running on the CPU cores
(on AMD CPUs with SME) and is randomly generated on each system reset,
so the kdump kernel has no way to obtain it. The encrypted memory can
therefore not be decrypted unless SME is active.

2. SME is disabled in the first kernel, but enabled in the kdump kernel.
On the one hand, the old memory is not encrypted and can be dumped as
usual, so SME does not need to be enabled in the kdump kernel. On the
other hand, supporting this case would complicate the code, because the
SME flag would have to be passed from the first kernel to the kdump
kernel, which is too expensive to do.

These patches are only for SME kdump; they do not support SEV kdump.

Signed-off-by: Lianbo Jiang 
Reviewed-by: Tom Lendacky 
---
Changes since v7:
1. Delete the file arch/x86/kernel/crash_dump_encrypt.c, move
copy_oldmem_page_encrypted() into arch/x86/kernel/crash_dump_64.c and
rewrite some functions. (Suggested by Borislav)
2. Fix all code style issues. (Suggested by Borislav)
3. Remove a redundant header file. (Suggested by Borislav)
4. Improve the patch log. (Suggested by Borislav)
5. Fix the build error "fs/proc/vmcore.c:115: undefined reference
   to `copy_oldmem_page_encrypted'" (see the sketch below).
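
The hunk that fixes this link error (the include/linux/crash_dump.h part
of the diffstat below) is not shown in this excerpt. A common way to
resolve such an error is a declaration plus a static-inline fallback,
roughly along these lines; this is a sketch under that assumption, not
the verbatim fix:

	/* include/linux/crash_dump.h -- hypothetical fallback sketch */
	#ifdef CONFIG_AMD_MEM_ENCRYPT
	ssize_t copy_oldmem_page_encrypted(unsigned long pfn, char *buf,
					   size_t csize, unsigned long offset,
					   int userbuf);
	#else
	static inline ssize_t copy_oldmem_page_encrypted(unsigned long pfn,
							 char *buf, size_t csize,
							 unsigned long offset,
							 int userbuf)
	{
		return 0;
	}
	#endif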

 arch/x86/kernel/crash_dump_64.c | 65 -
 fs/proc/vmcore.c| 24 +---
 include/linux/crash_dump.h  | 13 +++
 3 files changed, 87 insertions(+), 15 deletions(-)

diff --git a/arch/x86/kernel/crash_dump_64.c b/arch/x86/kernel/crash_dump_64.c
index 4f2e0778feac..6adbde592c44 100644
--- a/arch/x86/kernel/crash_dump_64.c
+++ b/arch/x86/kernel/crash_dump_64.c
@@ -12,7 +12,7 @@
 #include 
 
 /**
- * copy_oldmem_page - copy one page from "oldmem"
+ * __copy_oldmem_page - copy one page from "old memory encrypted or decrypted"
  * @pfn: page frame number to be copied
  * @buf: target memory address for the copy; this can be in kernel address
  * space or user address space (see @userbuf)
@@ -20,31 +20,78 @@
  * @offset: offset in bytes into the page (based on pfn) to begin the copy
  * @userbuf: if set, @buf is in user address space, use copy_to_user(),
  * otherwise @buf is in kernel address space, use memcpy().
+ * @encrypted: if true, the old memory is encrypted.
+ * if false, the old memory is decrypted.
  *
- * Copy a page from "oldmem". For this page, there is no pte mapped
- * in the current kernel. We stitch up a pte, similar to kmap_atomic.
+ * Copy a page from "old memory encrypted or decrypted". For this page, there
+ * is no pte mapped in the current kernel. We stitch up a pte, similar to
+ * kmap_atomic.
  */
-ssize_t copy_oldmem_page(unsigned long pfn, char *buf,
-   size_t csize, unsigned long offset, int userbuf)
+static ssize_t __copy_oldmem_page(unsigned long pfn, char *buf, size_t csize,
+ unsigned long offset, int userbuf,
+ bool encrypted)
 {
void  *vaddr;
 
if (!csize)
return 0;
 
-   vaddr = ioremap_cache(pfn << PAGE_SHIFT, PAGE_SIZE);
+   if (encrypted)
+   vaddr = (__force void *)ioremap_encrypted(pfn << PAGE_SHIFT, PAGE_SIZE);
+   else
+   vaddr = (__force void *)ioremap_cache(pfn << PAGE_SHIFT, PAGE_SIZE);
+
if (!vaddr)
return -ENOMEM;
 
if (userbuf) {
-   if (copy_to_user(buf, vaddr + offset, csize)) {
-   iounmap(vaddr);
+   if (copy_to_user((void __user *)buf, vaddr + offset, csize)) {
+   iounmap((void __iomem *)vaddr);
return -EFAULT;
}
} else
memcpy(buf, vaddr + offset, csize);
 
set_iounmap_nonlazy();
-   iounmap(vaddr);
+   iounmap((void __iomem *)vaddr);
return csize;
 }
+
+/**
+ * copy_oldmem_page - copy one page from "old memory decrypted"
+ * @pfn: page frame number to be copied
+ * @buf: target memory 

[PATCH v8 RESEND 3/4] iommu/amd: Remap the device table of IOMMU with the memory encryption mask for kdump

2018-09-29 Thread Lianbo Jiang
In the kdump kernel, the IOMMU driver copies the device table from the old
device table, which is encrypted when SME is enabled in the first kernel.
So the old device table has to be remapped with the memory encryption mask.

Signed-off-by: Lianbo Jiang 
Reviewed-by: Tom Lendacky 
Acked-by: Joerg Roedel 
---
 drivers/iommu/amd_iommu_init.c | 14 --
 1 file changed, 12 insertions(+), 2 deletions(-)

diff --git a/drivers/iommu/amd_iommu_init.c b/drivers/iommu/amd_iommu_init.c
index 84b3e4445d46..3931c7de7c69 100644
--- a/drivers/iommu/amd_iommu_init.c
+++ b/drivers/iommu/amd_iommu_init.c
@@ -902,12 +902,22 @@ static bool copy_device_table(void)
}
}
 
-   old_devtb_phys = entry & PAGE_MASK;
+   /*
+* When SME is enabled in the first kernel, the entry includes the
+* memory encryption mask(sme_me_mask), we must remove the memory
+* encryption mask to obtain the true physical address in kdump kernel.
+*/
+   old_devtb_phys = __sme_clr(entry) & PAGE_MASK;
+
if (old_devtb_phys >= 0x1ULL) {
pr_err("The address of old device table is above 4G, not 
trustworthy!\n");
return false;
}
-   old_devtb = memremap(old_devtb_phys, dev_table_size, MEMREMAP_WB);
+   old_devtb = (sme_active() && is_kdump_kernel())
+   ? (__force void *)ioremap_encrypted(old_devtb_phys,
+   dev_table_size)
+   : memremap(old_devtb_phys, dev_table_size, MEMREMAP_WB);
+
if (!old_devtb)
return false;
 
-- 
2.17.1




[PATCH v8 RESEND 2/4] kexec: allocate decrypted control pages for kdump in case SME is enabled

2018-09-29 Thread Lianbo Jiang
When SME is enabled in the first kernel, the kdump control pages need to
be allocated as decrypted (unencrypted) pages, because at the early stage
of booting into the kdump kernel these pages are not accessed encrypted;
this lets the kdump kernel boot in the same manner as it was originally
booted.
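
On x86, the arch hooks used below presumably boil down to
set_memory_decrypted()/set_memory_encrypted(); a sketch of that pairing is
shown here for context only (the real implementation lives in
arch/x86/kernel/machine_kexec_64.c and may differ in detail):

	#include <linux/kexec.h>
	#include <linux/set_memory.h>

	int arch_kexec_post_alloc_pages(void *vaddr, unsigned int pages, gfp_t gfp)
	{
		/* Clear the encryption mask so the pages are accessed in the clear. */
		return set_memory_decrypted((unsigned long)vaddr, pages);
	}

	void arch_kexec_pre_free_pages(void *vaddr, unsigned int pages)
	{
		/* Restore the encryption mask before the pages are freed. */
		set_memory_encrypted((unsigned long)vaddr, pages);
	}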

Signed-off-by: Lianbo Jiang 
Reviewed-by: Tom Lendacky 
---
Changes since v7:
1. Modify the comment in the code. (Suggested by Borislav)
2. Improve the patch log. (Suggested by Borislav)

 kernel/kexec_core.c | 14 ++
 1 file changed, 14 insertions(+)

diff --git a/kernel/kexec_core.c b/kernel/kexec_core.c
index 23a83a4da38a..6353daaee7f1 100644
--- a/kernel/kexec_core.c
+++ b/kernel/kexec_core.c
@@ -471,6 +471,18 @@ static struct page *kimage_alloc_crash_control_pages(struct kimage *image,
}
}
 
+   if (pages) {
+   /*
+* For kdump, it needs to ensure that these pages are
+* decrypted if SME is enabled.
+* By the way, it is unnecessary to call the arch_
+* kexec_pre_free_pages(), because these pages are
+* reserved memory and once the crash kernel is done,
+* it will always remain in these memory until reboot
+* or unloading.
+*/
+   arch_kexec_post_alloc_pages(page_address(pages), 1 << order, 0);
+   }
return pages;
 }
 
@@ -867,6 +879,7 @@ static int kimage_load_crash_segment(struct kimage *image,
result  = -ENOMEM;
goto out;
}
+   arch_kexec_post_alloc_pages(page_address(page), 1, 0);
ptr = kmap(page);
ptr += maddr & ~PAGE_MASK;
mchunk = min_t(size_t, mbytes,
@@ -884,6 +897,7 @@ static int kimage_load_crash_segment(struct kimage *image,
result = copy_from_user(ptr, buf, uchunk);
kexec_flush_icache_page(page);
kunmap(page);
+   arch_kexec_pre_free_pages(page_address(page), 1);
if (result) {
result = -EFAULT;
goto out;
-- 
2.17.1




[PATCH v8 RESEND 1/4] x86/ioremap: add a function ioremap_encrypted() to remap kdump old memory

2018-09-29 Thread Lianbo Jiang
When SME is enabled on an AMD machine, the memory is encrypted in the
first kernel. In this case, SME also needs to be enabled in the kdump
kernel, and the old memory has to be remapped with the memory encryption
mask.

Here we only discuss the case where SME is active in the first kernel and
also active in the kdump kernel. There are four cases that need to be
considered:

a. dump vmcore
   It is encrypted in the first kernel and needs to be read out in the
   kdump kernel.

b. crash notes
   When dumping the vmcore, useful information usually needs to be read
   from the notes, and the notes are also encrypted.

c. iommu device table
   It is allocated by the kernel, and its pointer is written into the MMIO
   of the AMD IOMMU. It is encrypted in the first kernel, and the old
   content needs to be read to extract useful information.

d. mmio of amd iommu
   These are registers reported by AMD firmware; they are not RAM and are
   not encrypted in either the first kernel or the kdump kernel.

To achieve the goal, the solution is:
1. Add a new bool parameter "encrypted" to __ioremap_caller().
   It is a low-level function; it checks the newly added parameter and,
   if it is true, remaps the memory with the SME mask.

2. Add a new function ioremap_encrypted() that explicitly passes "true"
   for "encrypted" (a sketch follows the diagram below).
   For cases a, b and c above, the kdump kernel calls ioremap_encrypted().

3. Adjust all existing ioremap wrapper functions to pass "false" for
   "encrypted", so that they behave as before.

   ioremap_encrypted()  \
   ioremap_cache()       |
   ioremap_prot()        |
   ioremap_wt()          |-> __ioremap_caller()
   ioremap_wc()          |
   ioremap_uc()          |
   ioremap_nocache()    /
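
The diff below is truncated before the new wrapper itself appears; based
on the description above, ioremap_encrypted() presumably looks roughly
like the following sketch (a write-back cacheable mapping with the
encryption bit set), though the exact upstream version may differ:

	void __iomem *ioremap_encrypted(resource_size_t phys_addr, unsigned long size)
	{
		return __ioremap_caller(phys_addr, size, _PAGE_CACHE_MODE_WB,
					__builtin_return_address(0), true);
	}
	EXPORT_SYMBOL(ioremap_encrypted);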

Signed-off-by: Lianbo Jiang 
Reviewed-by: Tom Lendacky 
---
Changes since v7:
1. Remove a redundant header file "linux/crash_dump.h". (Suggested by
   Borislav)
2. Fix code style issues. (Suggested by Borislav)
3. Improve the patch log. (Suggested by Baoquan)

 arch/x86/include/asm/io.h |  2 ++
 arch/x86/mm/ioremap.c | 24 
 2 files changed, 18 insertions(+), 8 deletions(-)

diff --git a/arch/x86/include/asm/io.h b/arch/x86/include/asm/io.h
index 6de64840dd22..b7b0bf36c400 100644
--- a/arch/x86/include/asm/io.h
+++ b/arch/x86/include/asm/io.h
@@ -192,6 +192,8 @@ extern void __iomem *ioremap_cache(resource_size_t offset, unsigned long size);
 #define ioremap_cache ioremap_cache
 extern void __iomem *ioremap_prot(resource_size_t offset, unsigned long size, unsigned long prot_val);
 #define ioremap_prot ioremap_prot
+extern void __iomem *ioremap_encrypted(resource_size_t phys_addr, unsigned long size);
+#define ioremap_encrypted ioremap_encrypted
 
 /**
  * ioremap -   map bus memory into CPU space
diff --git a/arch/x86/mm/ioremap.c b/arch/x86/mm/ioremap.c
index c63a545ec199..24e0920a9b25 100644
--- a/arch/x86/mm/ioremap.c
+++ b/arch/x86/mm/ioremap.c
@@ -131,7 +131,8 @@ static void __ioremap_check_mem(resource_size_t addr, unsigned long size,
  * caller shouldn't need to know that small detail.
  */
 static void __iomem *__ioremap_caller(resource_size_t phys_addr,
-   unsigned long size, enum page_cache_mode pcm, void *caller)
+   unsigned long size, enum page_cache_mode pcm,
+   void *caller, bool encrypted)
 {
unsigned long offset, vaddr;
resource_size_t last_addr;
@@ -199,7 +200,7 @@ static void __iomem *__ioremap_caller(resource_size_t phys_addr,
 * resulting mapping.
 */
prot = PAGE_KERNEL_IO;
-   if (sev_active() && mem_flags.desc_other)
+   if ((sev_active() && mem_flags.desc_other) || encrypted)
prot = pgprot_encrypted(prot);
 
switch (pcm) {
@@ -291,7 +292,7 @@ void __iomem *ioremap_nocache(resource_size_t phys_addr, unsigned long size)
enum page_cache_mode pcm = _PAGE_CACHE_MODE_UC_MINUS;
 
return __ioremap_caller(phys_addr, size, pcm,
-   __builtin_return_address(0));
+   __builtin_return_address(0), false);
 }
 EXPORT_SYMBOL(ioremap_nocache);
 
@@ -324,7 +325,7 @@ void __iomem *ioremap_uc(resource_size_t phys_addr, unsigned long size)
enum page_cache_mode pcm = _PAGE_CACHE_MODE_UC;
 
return __ioremap_caller(phys_addr, size, pcm,
-   __builtin_return_address(0));
+   __builtin_return_address(0), false);
 }
 EXPORT_SYMBOL_GPL(ioremap_uc);
 
@@ -341,7 +342,7 @@ EXPORT_SYMBOL_GPL(ioremap_uc);
 void __iomem *ioremap_wc(resource_size_t phys_addr, unsigned long size)
 {
return __ioremap_caller(phys_addr, size, _PAGE_CACHE_MODE_WC,
-   __builtin_return_address(0));
+   __builtin_return_address(0), false);
 }
 EXPORT_SYMBOL(ioremap_wc);
 
@@ -358,14 +359,21 @@ EXPORT_SYMBOL(ioremap_wc);
 void __iomem *ioremap_wt(resource_size_t phys_addr, unsigned long size)
 {
return 

[PATCH v8 RESEND 0/4] Support kdump for AMD secure memory encryption(SME)

2018-09-29 Thread Lianbo Jiang
When SME is enabled on an AMD machine, kdump also needs to be supported.
Because the memory is encrypted in the first kernel, the old memory has to
be remapped into the kdump kernel for dumping, and SME must also be enabled
in the kdump kernel, otherwise the old memory cannot be decrypted.

For kdump, it is necessary to distinguish whether the memory is encrypted,
and also which parts of the memory are encrypted or unencrypted, so that
the memory can be remapped appropriately and the CPU knows how to access
it.

A page of memory that is marked as encrypted is automatically decrypted
when read from DRAM and automatically encrypted when written to DRAM. If
the old memory is encrypted, it has to be remapped with the memory
encryption mask so that it is automatically decrypted when read from DRAM.
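
The "memory encryption mask" here is the SME C-bit carried in sme_me_mask;
the helpers used by the patches below manipulate it roughly as in this
simplified sketch of the existing kernel macros (shown for context only,
the real definitions differ in detail):

	#define __sme_set(x)		((x) | sme_me_mask)	/* add the C-bit */
	#define __sme_clr(x)		((x) & ~sme_me_mask)	/* strip the C-bit */

	#define pgprot_encrypted(prot)	__pgprot(__sme_set(pgprot_val(prot)))
	#define pgprot_decrypted(prot)	__pgprot(__sme_clr(pgprot_val(prot)))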

For SME kdump, there are two cases that are not supported:

 --------------------------------------------------------
| first kernel         | second kernel  | kdump support |
| (mem_encrypt=on|off) | (on|off)       | (yes|no)      |
|----------------------+----------------+---------------|
| on                   | on             | yes           |
| off                  | off            | yes           |
| on                   | off            | no            |
| off                  | on             | no            |
|______________________|________________|_______________|

1. SME is enabled in the first kernel, but disabled in the kdump kernel.
In this case the old memory is encrypted and cannot be decrypted: the
encryption key is not visible to any software running on the CPU cores
(on AMD CPUs with SME) and is randomly generated on each system reset,
so the kdump kernel has no way to obtain it. The encrypted memory can
therefore not be decrypted unless SME is active.

2. SME is disabled in the first kernel, but enabled in the kdump kernel.
There is no need to support this case: the old memory is not encrypted and
can be dumped as usual, so SME does not need to be enabled in the kdump
kernel. Moreover, if this scenario had to be supported, it would increase
the complexity of the code, because the SME flag would have to be passed
from the first kernel to the kdump kernel so that the kdump kernel knows
whether the old memory is encrypted.

There are two methods to pass the SME flag to the kdump kernel. The first
is to modify the assembly code, which touches common code and has too long
a path. The second is to use kexec-tools: the SME flag would be exported by
the first kernel via "proc" or "sysfs", kexec-tools would read it when
loading the kdump image and save it in boot_params, and the kdump kernel
could then remap the old memory according to the saved flag. Either way it
is too expensive to do this.

These patches are only for SME kdump; they do not support SEV kdump.

Test tools:
makedumpfile[v1.6.3]: https://github.com/LianboJ/makedumpfile
commit  "A draft for kdump vmcore about AMD SME"
Note: this makedumpfile patch can only dump the vmcore when SME is enabled.

crash-7.2.3: https://github.com/crash-utility/crash.git
commit <001f77a05585> "Fix for Linux 4.19-rc1 and later kernels that contain kernel commit <7290d5809571>"

kexec-tools-2.0.17: 
git://git.kernel.org/pub/scm/utils/kernel/kexec/kexec-tools.git
commit  "kexec: fix for "Unhandled rela relocation: 
R_X86_64_PLT32" error"

Note:
Before you load the kernel and initramfs for kdump, this patch
(http://lists.infradead.org/pipermail/kexec/2018-September/021460.html)
must be merged into kexec-tools; then the kdump kernel will work well.
This is because a patch was dropped from the series after v6
("x86/ioremap: strengthen the logic in early_memremap_pgprot_adjust() to
adjust encryption mask").

Test environment:
HP ProLiant DL385Gen10 AMD EPYC 7251
8-Core Processor
32768 MB memory
600 GB disk space

Linux 4.19-rc5:
git://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git
commit <6bf4ca7fbc85> "Linux 4.19-rc5"

Reference:
AMD64 Architecture Programmer's Manual
https://support.amd.com/TechDocs/24593.pdf

Changes since v6:
1. Drop one patch that was in v6 ("x86/ioremap: strengthen the logic in
early_memremap_pgprot_adjust() to adjust encryption mask"). Dave Young
suggested that this patch can be removed and kexec-tools fixed instead.
Reference:
http://lists.infradead.org/pipermail/kexec/2018-September/021460.html
2. Update the patch log.

Changes since v7:
1. Improve the patch log for patch 1/4. (Suggested by Baoquan He)
2. Add Reviewed-by for all patches. (Tom Lendacky)
3. Add Acked-by for patch 3/4. (Joerg Roedel)
4. Remove the header file linux/crash_dump.h from
arch/x86/mm/ioremap.c. (Suggested by Borislav)
5. Modify the comment and patch log for patch 2/4. (Suggested by Borislav)
6. Delete a file arch/x86/kernel/crash_dump_encrypt.c and rewrite some

Re: [PATCH 4/4 v8] kdump/vmcore: support encrypted old memory with SME enabled

2018-09-29 Thread lijiang
On 2018-09-30 02:25, kbuild test robot wrote:
> Hi Lianbo,
> 
> Thank you for the patch! Yet something to improve:
> 
> [auto build test ERROR on sof-driver-fuweitax/master]
> [also build test ERROR on v4.19-rc5 next-20180928]
> [if your patch is applied to the wrong git tree, please drop us a note to 
> help improve the system]
> 
> url:
> https://github.com/0day-ci/linux/commits/Lianbo-Jiang/Support-kdump-for-AMD-secure-memory-encryption-SME/20180930-001539
> base:   https://github.com/fuweitax/linux master
> config: i386-randconfig-x0-09300051 (attached as .config)
> compiler: gcc-5 (Debian 5.5.0-3) 5.4.1 20171010
> reproduce:
> # save the attached .config to linux build tree
> make ARCH=i386 
> 
> All errors (new ones prefixed by >>):
> 
>fs/proc/vmcore.o: In function `read_from_oldmem':
>>> fs/proc/vmcore.c:115: undefined reference to `copy_oldmem_page_encrypted'
> 
OK, I will fix this compile error and post again later.

Thanks.
> vim +115 fs/proc/vmcore.c
> 
> 88
> 89/* Reads a page from the oldmem device from given offset. */
> 90static ssize_t read_from_oldmem(char *buf, size_t count,
> 91u64 *ppos, int userbuf,
> 92bool encrypted)
> 93{
> 94unsigned long pfn, offset;
> 95size_t nr_bytes;
> 96ssize_t read = 0, tmp;
> 97
> 98if (!count)
> 99return 0;
>100
>101offset = (unsigned long)(*ppos % PAGE_SIZE);
>102pfn = (unsigned long)(*ppos / PAGE_SIZE);
>103
>104do {
>105if (count > (PAGE_SIZE - offset))
>106nr_bytes = PAGE_SIZE - offset;
>107else
>108nr_bytes = count;
>109
>110/* If pfn is not ram, return zeros for sparse 
> dump files */
>111if (pfn_is_ram(pfn) == 0)
>112memset(buf, 0, nr_bytes);
>113else {
>114if (encrypted)
>  > 115tmp = 
> copy_oldmem_page_encrypted(pfn, buf,
>116
>  nr_bytes,
>117
>  offset,
>118
>  userbuf);
>119else
>120tmp = copy_oldmem_page(pfn, 
> buf, nr_bytes,
>121   offset, 
> userbuf);
>122
>123if (tmp < 0)
>124return tmp;
>125}
>126*ppos += nr_bytes;
>127count -= nr_bytes;
>128buf += nr_bytes;
>129read += nr_bytes;
>130++pfn;
>131offset = 0;
>132} while (count);
>133
>134return read;
>135}
>136
> 
> ---
> 0-DAY kernel test infrastructureOpen Source Technology Center
> https://lists.01.org/pipermail/kbuild-all   Intel Corporation
> 

Re: [PATCH v7 RESEND 4/4] kdump/vmcore: support encrypted old memory with SME enabled

2018-09-29 Thread lijiang
On 2018-09-29 16:30, Borislav Petkov wrote:
> On Sat, Sep 29, 2018 at 02:24:52PM +0800, lijiang wrote:
>> At first, i added an input parameter for read_from_oldmem() because of 
>> encryption(SME). But
>> for avoiding to also add the same parameter for copy_oldmem_page(), so i 
>> added a new function
>> copy_oldmem_page_encrypted(). Maybe you had noticed that these functions 
>> were very similar.
> 
> If you have two very similar functions, you add a *static* workhorse function:
> 
> static ssize_t __copy_oldmem_page(unsigned long pfn, char *buf, size_t csize, 
> unsigned long offset,
> int userbuf, bool encrypted)
> 
> and you define two wrappers:
> 
> copy_oldmem_page()
> copy_oldmem_page_encrypted()
> 
> which both call __copy_oldmem_page() with the appropriate parameters.
> 

Great! Previously I was afraid that the maintainer might disagree with
rewriting the function copy_oldmem_page().

That's really great. I will modify this patch and post the series again.

Thanks.
Lianbo
> But you do *not* do a separate compilation unit just because. None of
> the reasons you've mentioned warrant having a separate compilation
> unit while you already have *the* perfect place to put everything -
> arch/x86/kernel/crash_dump_64.c
> 
>> So sorry, here "oldmem encrypted" means the "old memory is encrypted".
> 
> I know what it means - I'm trying to explain to you to write it out
> in plain english and not use some strange constructs like "oldmem
> encrypted".
> 
> A reader would wonder: why is this thing semi-abbreviated and in
> quotation marks? Does that mean anything special?
> 
> Our comments should not be write-only. So after you've written it, try
> to read it as someone who sees the code for the first time and think
> hard whether she/he will understand it.
> 
> Do you catch my drift now?
> 
Yes, got it. Thanks for your valuable time and patience.

Regards,
Lianbo



Re: [PATCH v7 RESEND 2/4] kexec: allocate unencrypted control pages for kdump in case SME is enabled

2018-09-29 Thread Borislav Petkov
On Fri, Sep 28, 2018 at 06:09:04PM +0800, lijiang wrote:
> But there are another cases, we might load or unload the crash kernel image 
> and
> initrafms, maybe again and again for test or debug, we don't reboot at once. 
> For

I don't think this qualifies even as a use case - this is what you do
during development.

-- 
Regards/Gruss,
Boris.

Good mailing practices for 400: avoid top-posting and trim the reply.



Re: [PATCH v7 RESEND 4/4] kdump/vmcore: support encrypted old memory with SME enabled

2018-09-29 Thread Borislav Petkov
On Sat, Sep 29, 2018 at 02:24:52PM +0800, lijiang wrote:
> At first, i added an input parameter for read_from_oldmem() because of 
> encryption(SME). But
> for avoiding to also add the same parameter for copy_oldmem_page(), so i 
> added a new function
> copy_oldmem_page_encrypted(). Maybe you had noticed that these functions were 
> very similar.

If you have two very similar functions, you add a *static* workhorse function:

static ssize_t __copy_oldmem_page(unsigned long pfn, char *buf, size_t csize, 
unsigned long offset,
  int userbuf, bool encrypted)

and you define two wrappers:

copy_oldmem_page()
copy_oldmem_page_encrypted()

which both call __copy_oldmem_page() with the appropriate parameters.
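
For illustration, such wrappers might look like the following sketch
(not the final patch):

	ssize_t copy_oldmem_page(unsigned long pfn, char *buf, size_t csize,
				 unsigned long offset, int userbuf)
	{
		return __copy_oldmem_page(pfn, buf, csize, offset, userbuf, false);
	}

	ssize_t copy_oldmem_page_encrypted(unsigned long pfn, char *buf, size_t csize,
					   unsigned long offset, int userbuf)
	{
		return __copy_oldmem_page(pfn, buf, csize, offset, userbuf, true);
	}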

But you do *not* do a separate compilation unit just because. None of
the reasons you've mentioned warrant having a separate compilation
unit while you already have *the* perfect place to put everything -
arch/x86/kernel/crash_dump_64.c

> So sorry, here "oldmem encrypted" means the "old memory is encrypted".

I know what it means - I'm trying to explain to you to write it out
in plain english and not use some strange constructs like "oldmem
encrypted".

A reader would wonder: why is this thing semi-abbreviated and in
quotation marks? Does that mean anything special?

Our comments should not be write-only. So after you've written it, try
to read it as someone who sees the code for the first time and think
hard whether she/he will understand it.

Do you catch my drift now?

-- 
Regards/Gruss,
Boris.

SUSE Linux GmbH, GF: Felix Imendörffer, Jane Smithard, Graham Norton, HRB 21284 
(AG Nürnberg)



Re: [PATCH v15 06/16] of/fdt: add helper functions for handling properties

2018-09-29 Thread kbuild test robot
Hi AKASHI,

Thank you for the patch! Yet something to improve:

[auto build test ERROR on arm64/for-next/core]
[also build test ERROR on v4.19-rc5 next-20180928]
[if your patch is applied to the wrong git tree, please drop us a note to help 
improve the system]

url:
https://github.com/0day-ci/linux/commits/AKASHI-Takahiro/arm64-kexec-add-kexec_file_load-support/20180928-151042
base:   https://git.kernel.org/pub/scm/linux/kernel/git/arm64/linux.git 
for-next/core
config: i386-randconfig-s0-201838 (attached as .config)
compiler: gcc-6 (Debian 6.4.0-9) 6.4.0 20171026
reproduce:
# save the attached .config to linux build tree
make ARCH=i386 

All errors (new ones prefixed by >>):

   drivers/of/fdt.o: In function `fdt_setprop_reg':
>> drivers/of/fdt.c:1358: undefined reference to `fdt_address_cells'
>> drivers/of/fdt.c:1361: undefined reference to `fdt_size_cells'

vim +1358 drivers/of/fdt.c

  1348  
  1349  int fdt_setprop_reg(void *fdt, int nodeoffset, const char *name,
  1350  u64 addr, u64 size)
  1351  {
  1352  int addr_cells, size_cells;
  1353  char buf[sizeof(__be32) * 2 * 2];
  1354  /* assume dt_root_[addr|size]_cells <= 2 */
  1355  void *prop;
  1356  size_t buf_size;
  1357  
> 1358  addr_cells = fdt_address_cells(fdt, 0);
  1359  if (addr_cells < 0)
  1360  return addr_cells;
> 1361  size_cells = fdt_size_cells(fdt, 0);

---
0-DAY kernel test infrastructureOpen Source Technology Center
https://lists.01.org/pipermail/kbuild-all   Intel Corporation




Re: [PATCH v7 RESEND 4/4] kdump/vmcore: support encrypted old memory with SME enabled

2018-09-29 Thread lijiang
On 2018-09-28 16:38, Borislav Petkov wrote:
> On Thu, Sep 27, 2018 at 03:19:54PM +0800, Lianbo Jiang wrote:
>> In kdump kernel, we need to dump the old memory into vmcore file,if SME
>> is enabled in the first kernel, we have to remap the old memory with the
>> memory encryption mask, which will be automatically decrypted when we
>> read from DRAM.
>>
>> For SME kdump, there are two cases that doesn't support:
> 
> ... and which are simply silly to support.
> 

But the root cause is that the encryption key is not visible to any
software running on the CPU cores (AMD CPUs with SME), and is randomly
generated on each system reset. That is to say, the kdump kernel has no
chance to get the encryption key, so the encrypted memory cannot be
decrypted unless SME is active.

Thanks.
>>
>>  --
>> | first-kernel | second-kernel | kdump support |
>> |  (mem_encrypt=on|off)|   (yes|no)|
>> |--+---+---|
>> | on   | on| yes   |
>> | off  | off   | yes   |
>> | on   | off   | no|
>> | off  | on| no|
>> |__|___|___|
>>
>> 1. SME is enabled in the first kernel, but SME is disabled in kdump kernel
>> In this case, because the old memory is encrypted, we can't decrypt the
>> old memory.
>>
>> 2. SME is disabled in the first kernel, but SME is enabled in kdump kernel
>> On the one hand, the old memory is unencrypted, the old memory can be dumped
> 
> s/unencrypted/decrypted/g
> 
> But I mentioned that already.
> 

Thanks for your valuable advice to improve these patches.
The patch log and all code style issues will be fixed.

>> as usual, we don't need to enable SME in kdump kernel; On the other hand, it
>> will increase the complexity of the code, we will have to consider how to
>> pass the SME flag from the first kernel to the kdump kernel, it is really
>> too expensive to do this.
>>
>> This patches are only for SME kdump, the patches don't support SEV kdump.
> 
> Please rewrite that commit message in passive voice. I.e., get rid of
> that "we".
> 
>>
>> Signed-off-by: Lianbo Jiang 
>> Reviewed-by: Tom Lendacky 
>> ---
>>  arch/x86/kernel/Makefile |  1 +
>>  arch/x86/kernel/crash_dump_encrypt.c | 53 
>>  fs/proc/vmcore.c | 21 +++
>>  include/linux/crash_dump.h   | 12 +++
>>  4 files changed, 81 insertions(+), 6 deletions(-)
>>  create mode 100644 arch/x86/kernel/crash_dump_encrypt.c
>>
>> diff --git a/arch/x86/kernel/Makefile b/arch/x86/kernel/Makefile
>> index 8824d01c0c35..dfbeae0e35ce 100644
>> --- a/arch/x86/kernel/Makefile
>> +++ b/arch/x86/kernel/Makefile
>> @@ -97,6 +97,7 @@ obj-$(CONFIG_KEXEC_CORE)   += machine_kexec_$(BITS).o
>>  obj-$(CONFIG_KEXEC_CORE)+= relocate_kernel_$(BITS).o crash.o
>>  obj-$(CONFIG_KEXEC_FILE)+= kexec-bzimage64.o
>>  obj-$(CONFIG_CRASH_DUMP)+= crash_dump_$(BITS).o
>> +obj-$(CONFIG_AMD_MEM_ENCRYPT)   += crash_dump_encrypt.o
> 
> No no.
> 
> This will build even in the CONFIG_CRASH_DUMP=n case.
> 
> Why does this need to be even a separate compilation unit? It is a file
> containing a single function?!?!
> 
> I would love to know what the logic behind this was...
> 
Good question. It's my pleasure.

At first, I added an input parameter to read_from_oldmem() because of
encryption (SME). But to avoid also adding the same parameter to
copy_oldmem_page(), I added a new function copy_oldmem_page_encrypted().
Maybe you had noticed that these functions were very similar.

Sometimes it is hard to make a choice about this. The reasons are as follows:

A. If I add the same input parameter to copy_oldmem_page(), that causes more
   code changes, because the interface change affects other architectures,
   such as ARM/MIPS/POWERPC/S390.

B. If not A, I have to add a new function copy_oldmem_page_encrypted() and
   put it into the same file (arch/x86/kernel/crash_dump_64.c).

   Should I merge the similar functions into one function?
   (copy_oldmem_page() and copy_oldmem_page_encrypted())

   a. If the answer is no, I have two choices:
      1) Put these functions into the same file
         (arch/x86/kernel/crash_dump_64.c). But someone might think that
         these functions are too similar and should be merged or rebuilt,
         which contradicts case a.
      2) Add a new file for copy_oldmem_page_encrypted(), which avoids
         case 1). That is the current solution.

   b. If the answer is yes, I have to add an input parameter to
      copy_oldmem_page(), which is back to case A.

Could you give me any advice about this? Any advice will be appreciated.

Thanks.
Lianbo

>>  obj-y   += kprobes/
>>  obj-$(CONFIG_MODULES)   += module.o
>>