RE: Compiling makedumpfile from source

2021-07-27 Thread HAGIO KAZUHITO (萩尾 一仁)
-----Original Message-----
> Hi kazuhito san,
> 
> Just following up on my last email.
> Sincere apologies for asking for your time.
> I want to specifically understand what the "user data" section is and
> what it means to exclude it from the dump.

"user data" are anonymous pages or huge pages.
https://github.com/makedumpfile/makedumpfile/blob/master/makedumpfile.c#L6224

Please consult the __exclude_unnecessary_pages() function above to see
which conditions correspond to each type of page.
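
For quick reference, a simplified C sketch of the dump-level bits and the
shape of the "user data" test. It paraphrases makedumpfile's DL_EXCLUDE_*
constants and the anonymous/hugetlb check rather than quoting the source
verbatim:

#include <stdbool.h>

/* Dump-level bitmask, as passed to makedumpfile -d (paraphrased from
 * makedumpfile's DL_EXCLUDE_* constants). */
#define DL_EXCLUDE_ZERO       0x001   /* pages filled with zeros       */
#define DL_EXCLUDE_CACHE      0x002   /* non-private cache pages       */
#define DL_EXCLUDE_CACHE_PRI  0x004   /* cache pages with private data */
#define DL_EXCLUDE_USER_DATA  0x008   /* anonymous and hugetlb pages   */
#define DL_EXCLUDE_FREE       0x010   /* free (unallocated) pages      */

/* The kernel marks an anonymous page by setting the low bit of
 * page->mapping (PAGE_MAPPING_ANON). */
#define PAGE_MAPPING_ANON     0x1UL

/* Simplified shape of the per-page "user data" condition in
 * __exclude_unnecessary_pages(); not the literal makedumpfile code. */
static bool exclude_as_user_data(unsigned long dump_level,
                                 unsigned long mapping, bool hugetlb)
{
        return (dump_level & DL_EXCLUDE_USER_DATA) &&
               ((mapping & PAGE_MAPPING_ANON) || hugetlb);
}

This also bears on the gcore question below: with DL_EXCLUDE_USER_DATA set,
process anonymous memory is absent from the dump, so extracting a usable
core file for a user process will generally fail.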

Thanks,
Kazu

> 
> Thank you very much.
> 
> Best Regards,
> Manty
> 
> On Fri, Jul 9, 2021 at 10:45 AM manty kuma  wrote:
> >
> > Hi Kazuhito san,
> >
> > I am looking to better understand the sections being filtered out
> > with each of the following options.
> >
> > Zero page:
> > Pages that are empty. Ignoring these pages won't have any impact on 
> > analysis.
> >
> > non-private cache and private cache:
> > What exactly are these sections of memory? Just a one-line overview
> > about them is sufficient.
> > (My understanding was that cache is not part of RAM. Is this cache
> > something else? Like some bookkeeping data maintained by the kernel?)
> >
> >
> > user data:
> > Are these sections of the memory for the user space processes/memory
> > sections allocated using malloc?
> > My understanding is that if I exclude this section, gcore would not
> > work. Is my understanding correct?
> > I expected this section to be big. But in fact excluding this did not
> > have much impact on the dump size.
> >
> > free page:
> > Unallocated pages. Since they are not allocated, filtering them out
> > won't have any impact on dump analysis.
> > Please correct me if I am wrong.
> >
> >
> > If there is already some place that explains what these sections
> > filter out, please just drop the reference to them and I will look
> > into it.
> > Thank you very much in advance!
> >
> > Manty
> >
> > On Thu, Jul 8, 2021 at 8:17 PM manty kuma  wrote:
> > >
> > > Sorry. I am not sure how but I completely missed this email.
> > > Yes, /tmp was not available in my env. I just did mkdir before
> > > executing `makedumpfile` and it is now working well.
> > > Thank you very much.
> > >
> > > On Mon, Jun 28, 2021 at 4:07 PM HAGIO KAZUHITO(萩尾 一仁)
> > >  wrote:
> > > >
> > > > -----Original Message-----
> > > > > Hi Kazuhito san,
> > > > >
> > > > > I am getting the following error when trying to use the makedumpfile
> > > > > utility.
> > > > >
> > > > > > copy_vmcoreinfo: Can't open the vmcoreinfo 
> > > > > > file(/tmp/vmcoreinfoLUQc25). No such file or directory.
> > > > > > makedumpfile Failed
> > > > >
> > > > > In your setup how are you providing the vmcoreinfo file? In my case it
> > > > > is checking /tmp/vmcoreinfoLUQc25
> > > > > Who generates this file?
> > > >
> > > > Generally, vmcoreinfo is copied from vmcore's ELF note to 
> > > > /tmp/vmcoreinfoXX
> > > > by makedumpfile, please see copy_vmcoreinfo().  So there is no need
> > > > to provide it explicitly.
> > > >
> > > > Does the /tmp directory exist in your environment?
> > > >
> > > > Kazu
> > > >
> > > >
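
For context, a minimal standalone sketch of the kind of temporary-file
creation copy_vmcoreinfo() relies on. The program below is illustrative,
not makedumpfile source; it shows why a missing /tmp yields the "No such
file or directory" failure quoted above:

#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>

int main(void)
{
        /* makedumpfile builds names of the form /tmp/vmcoreinfoXXXXXX;
         * if the /tmp directory itself is missing, mkstemp() fails with
         * ENOENT ("No such file or directory"). */
        char name[] = "/tmp/vmcoreinfoXXXXXX";
        int fd = mkstemp(name);

        if (fd < 0) {
                perror("mkstemp");
                return 1;
        }

        /* ... the vmcoreinfo ELF note data would be written here ... */

        unlink(name);
        close(fd);
        return 0;
}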


Re: [PATCH 00/11] Implement generic prot_guest_has() helper function

2021-07-27 Thread Tom Lendacky
On 7/27/21 5:26 PM, Tom Lendacky wrote:
> This patch series provides a generic helper function, prot_guest_has(),
> to replace the sme_active(), sev_active(), sev_es_active() and
> mem_encrypt_active() functions.
> 
> It is expected that as new protected virtualization technologies are
> added to the kernel, they can all be covered by a single function call
> instead of a collection of specific function calls all called from the
> same locations.
> 
> The powerpc and s390 patches have been compile tested only. Can the
> folks copied on this series verify that nothing breaks for them?

I wanted to get this out before I head out on vacation at the end of the
week. I'll only be out for a week, but I won't be able to respond to any
feedback until I get back.

I'm still not a fan of the name prot_guest_has() because it is used for
some bare-metal checks, but I really haven't been able to come up with
anything better. So take it with a grain of salt where the sme_active()
calls are replaced by prot_guest_has().
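
Concretely, the pattern in question, as it ends up in patch #4
(arch/x86/kernel/pci-swiotlb.c), where a bare-metal SME check becomes a
generic attribute check:

/* before: vendor-specific, even though the check concerns the host */
if (sme_active())
        swiotlb = 1;

/* after: generic attribute with the same bare-metal semantics */
if (prot_guest_has(PATTR_HOST_MEM_ENCRYPT))
        swiotlb = 1;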

Also, let me know if the treewide changes in patch #7 need to be further
split out by tree.

Thanks,
Tom

> 
> Cc: Andi Kleen 
> Cc: Andy Lutomirski 
> Cc: Ard Biesheuvel 
> Cc: Baoquan He 
> Cc: Benjamin Herrenschmidt 
> Cc: Borislav Petkov 
> Cc: Christian Borntraeger 
> Cc: Daniel Vetter 
> Cc: Dave Hansen 
> Cc: Dave Young 
> Cc: David Airlie 
> Cc: Heiko Carstens 
> Cc: Ingo Molnar 
> Cc: Joerg Roedel 
> Cc: Maarten Lankhorst 
> Cc: Maxime Ripard 
> Cc: Michael Ellerman 
> Cc: Paul Mackerras 
> Cc: Peter Zijlstra 
> Cc: Thomas Gleixner 
> Cc: Thomas Zimmermann 
> Cc: Vasily Gorbik 
> Cc: VMware Graphics 
> Cc: Will Deacon 
> 
> ---
> 
> Patches based on:
>   https://git.kernel.org/pub/scm/linux/kernel/git/tip/tip.git master
>   commit 79e920060fa7 ("Merge branch 'WIP/fixes'")
> 
> Tom Lendacky (11):
>   mm: Introduce a function to check for virtualization protection
> features
>   x86/sev: Add an x86 version of prot_guest_has()
>   powerpc/pseries/svm: Add a powerpc version of prot_guest_has()
>   x86/sme: Replace occurrences of sme_active() with prot_guest_has()
>   x86/sev: Replace occurrences of sev_active() with prot_guest_has()
>   x86/sev: Replace occurrences of sev_es_active() with prot_guest_has()
>   treewide: Replace the use of mem_encrypt_active() with
> prot_guest_has()
>   mm: Remove the now unused mem_encrypt_active() function
>   x86/sev: Remove the now unused mem_encrypt_active() function
>   powerpc/pseries/svm: Remove the now unused mem_encrypt_active()
> function
>   s390/mm: Remove the now unused mem_encrypt_active() function
> 
>  arch/Kconfig   |  3 ++
>  arch/powerpc/include/asm/mem_encrypt.h |  5 --
>  arch/powerpc/include/asm/protected_guest.h | 30 +++
>  arch/powerpc/platforms/pseries/Kconfig |  1 +
>  arch/s390/include/asm/mem_encrypt.h|  2 -
>  arch/x86/Kconfig   |  1 +
>  arch/x86/include/asm/kexec.h   |  2 +-
>  arch/x86/include/asm/mem_encrypt.h | 13 +
>  arch/x86/include/asm/protected_guest.h | 27 ++
>  arch/x86/kernel/crash_dump_64.c|  4 +-
>  arch/x86/kernel/head64.c   |  4 +-
>  arch/x86/kernel/kvm.c  |  3 +-
>  arch/x86/kernel/kvmclock.c |  4 +-
>  arch/x86/kernel/machine_kexec_64.c | 19 +++
>  arch/x86/kernel/pci-swiotlb.c  |  9 ++--
>  arch/x86/kernel/relocate_kernel_64.S   |  2 +-
>  arch/x86/kernel/sev.c  |  6 +--
>  arch/x86/kvm/svm/svm.c |  3 +-
>  arch/x86/mm/ioremap.c  | 16 +++---
>  arch/x86/mm/mem_encrypt.c  | 60 +++---
>  arch/x86/mm/mem_encrypt_identity.c |  3 +-
>  arch/x86/mm/pat/set_memory.c   |  3 +-
>  arch/x86/platform/efi/efi_64.c |  9 ++--
>  arch/x86/realmode/init.c   |  8 +--
>  drivers/gpu/drm/amd/amdgpu/amdgpu_drv.c|  4 +-
>  drivers/gpu/drm/drm_cache.c|  4 +-
>  drivers/gpu/drm/vmwgfx/vmwgfx_drv.c|  4 +-
>  drivers/gpu/drm/vmwgfx/vmwgfx_msg.c|  6 +--
>  drivers/iommu/amd/init.c   |  7 +--
>  drivers/iommu/amd/iommu.c  |  3 +-
>  drivers/iommu/amd/iommu_v2.c   |  3 +-
>  drivers/iommu/iommu.c  |  3 +-
>  fs/proc/vmcore.c   |  6 +--
>  include/linux/mem_encrypt.h|  4 --
>  include/linux/protected_guest.h| 37 +
>  kernel/dma/swiotlb.c   |  4 +-
>  36 files changed, 218 insertions(+), 104 deletions(-)
>  create mode 100644 arch/powerpc/include/asm/protected_guest.h
>  create mode 100644 arch/x86/include/asm/protected_guest.h
>  create mode 100644 include/linux/protected_guest.h
> 


[PATCH 10/11] powerpc/pseries/svm: Remove the now unused mem_encrypt_active() function

2021-07-27 Thread Tom Lendacky
The mem_encrypt_active() function has been replaced by prot_guest_has(),
so remove the implementation.

Cc: Michael Ellerman 
Cc: Benjamin Herrenschmidt 
Cc: Paul Mackerras 
Signed-off-by: Tom Lendacky 
---
 arch/powerpc/include/asm/mem_encrypt.h | 5 -
 1 file changed, 5 deletions(-)

diff --git a/arch/powerpc/include/asm/mem_encrypt.h b/arch/powerpc/include/asm/mem_encrypt.h
index ba9dab07c1be..2f26b8fc8d29 100644
--- a/arch/powerpc/include/asm/mem_encrypt.h
+++ b/arch/powerpc/include/asm/mem_encrypt.h
@@ -10,11 +10,6 @@
 
 #include 
 
-static inline bool mem_encrypt_active(void)
-{
-   return is_secure_guest();
-}
-
 static inline bool force_dma_unencrypted(struct device *dev)
 {
return is_secure_guest();
-- 
2.32.0




[PATCH 09/11] x86/sev: Remove the now unused mem_encrypt_active() function

2021-07-27 Thread Tom Lendacky
The mem_encrypt_active() function has been replaced by prot_guest_has(),
so remove the implementation.

Cc: Thomas Gleixner 
Cc: Ingo Molnar 
Cc: Borislav Petkov 
Signed-off-by: Tom Lendacky 
---
 arch/x86/include/asm/mem_encrypt.h | 5 -
 1 file changed, 5 deletions(-)

diff --git a/arch/x86/include/asm/mem_encrypt.h b/arch/x86/include/asm/mem_encrypt.h
index 797146e0cd6b..94c089e9ea69 100644
--- a/arch/x86/include/asm/mem_encrypt.h
+++ b/arch/x86/include/asm/mem_encrypt.h
@@ -97,11 +97,6 @@ static inline void mem_encrypt_free_decrypted_mem(void) { }
 
extern char __start_bss_decrypted[], __end_bss_decrypted[], __start_bss_decrypted_unused[];
 
-static inline bool mem_encrypt_active(void)
-{
-   return sme_me_mask;
-}
-
 static inline u64 sme_get_me_mask(void)
 {
return sme_me_mask;
-- 
2.32.0




[PATCH 08/11] mm: Remove the now unused mem_encrypt_active() function

2021-07-27 Thread Tom Lendacky
The mem_encrypt_active() function has been replaced by prot_guest_has(),
so remove the implementation.

Signed-off-by: Tom Lendacky 
---
 include/linux/mem_encrypt.h | 4 
 1 file changed, 4 deletions(-)

diff --git a/include/linux/mem_encrypt.h b/include/linux/mem_encrypt.h
index 5c4a18a91f89..ae4526389261 100644
--- a/include/linux/mem_encrypt.h
+++ b/include/linux/mem_encrypt.h
@@ -16,10 +16,6 @@
 
 #include 
 
-#else  /* !CONFIG_ARCH_HAS_MEM_ENCRYPT */
-
-static inline bool mem_encrypt_active(void) { return false; }
-
 #endif /* CONFIG_ARCH_HAS_MEM_ENCRYPT */
 
 #ifdef CONFIG_AMD_MEM_ENCRYPT
-- 
2.32.0




[PATCH 07/11] treewide: Replace the use of mem_encrypt_active() with prot_guest_has()

2021-07-27 Thread Tom Lendacky
Replace occurrences of mem_encrypt_active() with calls to prot_guest_has()
using the PATTR_MEM_ENCRYPT attribute.

Cc: Thomas Gleixner 
Cc: Ingo Molnar 
Cc: Borislav Petkov 
Cc: Dave Hansen 
Cc: Andy Lutomirski 
Cc: Peter Zijlstra 
Cc: David Airlie 
Cc: Daniel Vetter 
Cc: Maarten Lankhorst 
Cc: Maxime Ripard 
Cc: Thomas Zimmermann 
Cc: VMware Graphics 
Cc: Joerg Roedel 
Cc: Will Deacon 
Cc: Dave Young 
Cc: Baoquan He 
Signed-off-by: Tom Lendacky 
---
 arch/x86/kernel/head64.c| 4 ++--
 arch/x86/mm/ioremap.c   | 4 ++--
 arch/x86/mm/mem_encrypt.c   | 5 ++---
 arch/x86/mm/pat/set_memory.c| 3 ++-
 drivers/gpu/drm/amd/amdgpu/amdgpu_drv.c | 4 +++-
 drivers/gpu/drm/drm_cache.c | 4 ++--
 drivers/gpu/drm/vmwgfx/vmwgfx_drv.c | 4 ++--
 drivers/gpu/drm/vmwgfx/vmwgfx_msg.c | 6 +++---
 drivers/iommu/amd/iommu.c   | 3 ++-
 drivers/iommu/amd/iommu_v2.c| 3 ++-
 drivers/iommu/iommu.c   | 3 ++-
 fs/proc/vmcore.c| 6 +++---
 kernel/dma/swiotlb.c| 4 ++--
 13 files changed, 29 insertions(+), 24 deletions(-)

diff --git a/arch/x86/kernel/head64.c b/arch/x86/kernel/head64.c
index de01903c3735..cafed6456d45 100644
--- a/arch/x86/kernel/head64.c
+++ b/arch/x86/kernel/head64.c
@@ -19,7 +19,7 @@
 #include 
 #include 
 #include 
-#include 
+#include 
 #include 
 
 #include 
@@ -285,7 +285,7 @@ unsigned long __head __startup_64(unsigned long physaddr,
 * there is no need to zero it after changing the memory encryption
 * attribute.
 */
-   if (mem_encrypt_active()) {
+   if (prot_guest_has(PATTR_MEM_ENCRYPT)) {
vaddr = (unsigned long)__start_bss_decrypted;
vaddr_end = (unsigned long)__end_bss_decrypted;
for (; vaddr < vaddr_end; vaddr += PMD_SIZE) {
diff --git a/arch/x86/mm/ioremap.c b/arch/x86/mm/ioremap.c
index 0f2d5ace5986..5e1c1f5cbbe8 100644
--- a/arch/x86/mm/ioremap.c
+++ b/arch/x86/mm/ioremap.c
@@ -693,7 +693,7 @@ static bool __init early_memremap_is_setup_data(resource_size_t phys_addr,
 bool arch_memremap_can_ram_remap(resource_size_t phys_addr, unsigned long size,
 unsigned long flags)
 {
-   if (!mem_encrypt_active())
+   if (!prot_guest_has(PATTR_MEM_ENCRYPT))
return true;
 
if (flags & MEMREMAP_ENC)
@@ -723,7 +723,7 @@ pgprot_t __init early_memremap_pgprot_adjust(resource_size_t phys_addr,
 {
bool encrypted_prot;
 
-   if (!mem_encrypt_active())
+   if (!prot_guest_has(PATTR_MEM_ENCRYPT))
return prot;
 
encrypted_prot = true;
diff --git a/arch/x86/mm/mem_encrypt.c b/arch/x86/mm/mem_encrypt.c
index 451de8e84fce..0f1533dbe81c 100644
--- a/arch/x86/mm/mem_encrypt.c
+++ b/arch/x86/mm/mem_encrypt.c
@@ -364,8 +364,7 @@ int __init early_set_memory_encrypted(unsigned long vaddr, unsigned long size)
 /*
  * SME and SEV are very similar but they are not the same, so there are
  * times that the kernel will need to distinguish between SME and SEV. The
- * sme_active() and sev_active() functions are used for this.  When a
- * distinction isn't needed, the mem_encrypt_active() function can be used.
+ * sme_active() and sev_active() functions are used for this.
  *
  * The trampoline code is a good example for this requirement.  Before
  * paging is activated, SME will access all memory as decrypted, but SEV
@@ -451,7 +450,7 @@ void __init mem_encrypt_free_decrypted_mem(void)
 * The unused memory range was mapped decrypted, change the encryption
 * attribute from decrypted to encrypted before freeing it.
 */
-   if (mem_encrypt_active()) {
+   if (sme_me_mask) {
r = set_memory_encrypted(vaddr, npages);
if (r) {
pr_warn("failed to free unused decrypted pages\n");
diff --git a/arch/x86/mm/pat/set_memory.c b/arch/x86/mm/pat/set_memory.c
index ad8a5c586a35..6925f2bb4be1 100644
--- a/arch/x86/mm/pat/set_memory.c
+++ b/arch/x86/mm/pat/set_memory.c
@@ -18,6 +18,7 @@
 #include 
 #include 
 #include 
+#include 
 
 #include 
 #include 
@@ -1986,7 +1987,7 @@ static int __set_memory_enc_dec(unsigned long addr, int numpages, bool enc)
int ret;
 
/* Nothing to do if memory encryption is not active */
-   if (!mem_encrypt_active())
+   if (!prot_guest_has(PATTR_MEM_ENCRYPT))
return 0;
 
/* Should not be working on unaligned addresses */
diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_drv.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_drv.c
index abb928894eac..8407224717df 100644
--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_drv.c
+++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_drv.c
@@ -38,6 +38,7 @@
 #include 
 #include 
 #include 
+#include 
 
 #include "amdgpu.h"
 #include "amdgpu_irq.h"
@@ -1239,7 +1240,8 @@ static int amdgpu_pci_probe(struct pci_dev *pdev,
 * however, SME requires an 

[PATCH 06/11] x86/sev: Replace occurrences of sev_es_active() with prot_guest_has()

2021-07-27 Thread Tom Lendacky
Replace occurrences of sev_es_active() with the more generic
prot_guest_has() using PATTR_GUEST_PROT_STATE, except for in
arch/x86/kernel/sev*.c and arch/x86/mm/mem_encrypt*.c where PATTR_SEV_ES
will be used. If future support is added for other memory encryption
technologies, the use of PATTR_GUEST_PROT_STATE can be updated, as
required, to specifically use PATTR_SEV_ES.
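
Side by side, the two call styles the diff below ends up with:

/* generic call sites (e.g. arch/x86/realmode/init.c) use the generic
 * attribute: */
if (prot_guest_has(PATTR_GUEST_PROT_STATE))
        /* ... SEV-ES-specific setup ... */;

/* code in arch/x86/mm/mem_encrypt*.c keeps the specific attribute: */
if (amd_prot_guest_has(PATTR_SEV_ES))
        pr_cont(" SEV-ES");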

Cc: Thomas Gleixner 
Cc: Ingo Molnar 
Cc: Borislav Petkov 
Signed-off-by: Tom Lendacky 
---
 arch/x86/include/asm/mem_encrypt.h | 2 --
 arch/x86/kernel/sev.c  | 6 +++---
 arch/x86/mm/mem_encrypt.c  | 7 +++
 arch/x86/realmode/init.c   | 3 +--
 4 files changed, 7 insertions(+), 11 deletions(-)

diff --git a/arch/x86/include/asm/mem_encrypt.h b/arch/x86/include/asm/mem_encrypt.h
index 7e25de37c148..797146e0cd6b 100644
--- a/arch/x86/include/asm/mem_encrypt.h
+++ b/arch/x86/include/asm/mem_encrypt.h
@@ -50,7 +50,6 @@ void __init mem_encrypt_free_decrypted_mem(void);
 void __init mem_encrypt_init(void);
 
 void __init sev_es_init_vc_handling(void);
-bool sev_es_active(void);
 bool amd_prot_guest_has(unsigned int attr);
 
 #define __bss_decrypted __section(".bss..decrypted")
@@ -74,7 +73,6 @@ static inline void __init sme_encrypt_kernel(struct boot_params *bp) { }
 static inline void __init sme_enable(struct boot_params *bp) { }
 
 static inline void sev_es_init_vc_handling(void) { }
-static inline bool sev_es_active(void) { return false; }
 static inline bool amd_prot_guest_has(unsigned int attr) { return false; }
 
 static inline int __init
diff --git a/arch/x86/kernel/sev.c b/arch/x86/kernel/sev.c
index a6895e440bc3..66a4ab9d95d7 100644
--- a/arch/x86/kernel/sev.c
+++ b/arch/x86/kernel/sev.c
@@ -11,7 +11,7 @@
 
 #include  /* For show_regs() */
 #include 
-#include 
+#include 
 #include 
 #include 
 #include 
@@ -615,7 +615,7 @@ int __init sev_es_efi_map_ghcbs(pgd_t *pgd)
int cpu;
u64 pfn;
 
-   if (!sev_es_active())
+   if (!prot_guest_has(PATTR_SEV_ES))
return 0;
 
pflags = _PAGE_NX | _PAGE_RW;
@@ -774,7 +774,7 @@ void __init sev_es_init_vc_handling(void)
 
BUILD_BUG_ON(offsetof(struct sev_es_runtime_data, ghcb_page) % PAGE_SIZE);
 
-   if (!sev_es_active())
+   if (!prot_guest_has(PATTR_SEV_ES))
return;
 
if (!sev_es_check_cpu_features())
diff --git a/arch/x86/mm/mem_encrypt.c b/arch/x86/mm/mem_encrypt.c
index eb5cae93b238..451de8e84fce 100644
--- a/arch/x86/mm/mem_encrypt.c
+++ b/arch/x86/mm/mem_encrypt.c
@@ -383,8 +383,7 @@ static bool sme_active(void)
return sme_me_mask && !sev_active();
 }
 
-/* Needs to be called from non-instrumentable code */
-bool noinstr sev_es_active(void)
+static bool sev_es_active(void)
 {
return sev_status & MSR_AMD64_SEV_ES_ENABLED;
 }
@@ -482,7 +481,7 @@ static void print_mem_encrypt_feature_info(void)
pr_cont(" SEV");
 
/* Encrypted Register State */
-   if (sev_es_active())
+   if (amd_prot_guest_has(PATTR_SEV_ES))
pr_cont(" SEV-ES");
 
pr_cont("\n");
@@ -501,7 +500,7 @@ void __init mem_encrypt_init(void)
 * With SEV, we need to unroll the rep string I/O instructions,
 * but SEV-ES supports them through the #VC handler.
 */
-   if (amd_prot_guest_has(PATTR_SEV) && !sev_es_active())
+   if (amd_prot_guest_has(PATTR_SEV) && !amd_prot_guest_has(PATTR_SEV_ES))
static_branch_enable(_enable_key);
 
print_mem_encrypt_feature_info();
diff --git a/arch/x86/realmode/init.c b/arch/x86/realmode/init.c
index 2109ae569c67..7711d0071f41 100644
--- a/arch/x86/realmode/init.c
+++ b/arch/x86/realmode/init.c
@@ -2,7 +2,6 @@
 #include 
 #include 
 #include 
-#include 
 #include 
 #include 
 
@@ -48,7 +47,7 @@ static void sme_sev_setup_real_mode(struct trampoline_header *th)
if (prot_guest_has(PATTR_HOST_MEM_ENCRYPT))
th->flags |= TH_FLAGS_SME_ACTIVE;
 
-   if (sev_es_active()) {
+   if (prot_guest_has(PATTR_GUEST_PROT_STATE)) {
/*
 * Skip the call to verify_cpu() in secondary_startup_64 as it
 * will cause #VC exceptions when the AP can't handle them yet.
-- 
2.32.0




[PATCH 05/11] x86/sev: Replace occurrences of sev_active() with prot_guest_has()

2021-07-27 Thread Tom Lendacky
Replace occurrences of sev_active() with the more generic prot_guest_has()
using PATTR_GUEST_MEM_ENCRYPT, except for in arch/x86/mm/mem_encrypt*.c
where PATTR_SEV will be used. If future support is added for other memory
encryption technologies, the use of PATTR_GUEST_MEM_ENCRYPT can be
updated, as required, to use PATTR_SEV.

Cc: Thomas Gleixner 
Cc: Ingo Molnar 
Cc: Borislav Petkov 
Cc: Dave Hansen 
Cc: Andy Lutomirski 
Cc: Peter Zijlstra 
Cc: Ard Biesheuvel 
Signed-off-by: Tom Lendacky 
---
 arch/x86/include/asm/mem_encrypt.h |  2 --
 arch/x86/kernel/crash_dump_64.c|  4 +++-
 arch/x86/kernel/kvm.c  |  3 ++-
 arch/x86/kernel/kvmclock.c |  4 ++--
 arch/x86/kernel/machine_kexec_64.c | 16 
 arch/x86/kvm/svm/svm.c |  3 ++-
 arch/x86/mm/ioremap.c  |  6 +++---
 arch/x86/mm/mem_encrypt.c  | 15 +++
 arch/x86/platform/efi/efi_64.c |  9 +
 9 files changed, 32 insertions(+), 30 deletions(-)

diff --git a/arch/x86/include/asm/mem_encrypt.h b/arch/x86/include/asm/mem_encrypt.h
index 956338406cec..7e25de37c148 100644
--- a/arch/x86/include/asm/mem_encrypt.h
+++ b/arch/x86/include/asm/mem_encrypt.h
@@ -50,7 +50,6 @@ void __init mem_encrypt_free_decrypted_mem(void);
 void __init mem_encrypt_init(void);
 
 void __init sev_es_init_vc_handling(void);
-bool sev_active(void);
 bool sev_es_active(void);
 bool amd_prot_guest_has(unsigned int attr);
 
@@ -75,7 +74,6 @@ static inline void __init sme_encrypt_kernel(struct boot_params *bp) { }
 static inline void __init sme_enable(struct boot_params *bp) { }
 
 static inline void sev_es_init_vc_handling(void) { }
-static inline bool sev_active(void) { return false; }
 static inline bool sev_es_active(void) { return false; }
 static inline bool amd_prot_guest_has(unsigned int attr) { return false; }
 
diff --git a/arch/x86/kernel/crash_dump_64.c b/arch/x86/kernel/crash_dump_64.c
index 045e82e8945b..0cfe35f03e67 100644
--- a/arch/x86/kernel/crash_dump_64.c
+++ b/arch/x86/kernel/crash_dump_64.c
@@ -10,6 +10,7 @@
 #include 
 #include 
 #include 
+#include 
 
 static ssize_t __copy_oldmem_page(unsigned long pfn, char *buf, size_t csize,
  unsigned long offset, int userbuf,
@@ -73,5 +74,6 @@ ssize_t copy_oldmem_page_encrypted(unsigned long pfn, char *buf, size_t csize,
 
 ssize_t elfcorehdr_read(char *buf, size_t count, u64 *ppos)
 {
-   return read_from_oldmem(buf, count, ppos, 0, sev_active());
+   return read_from_oldmem(buf, count, ppos, 0,
+   prot_guest_has(PATTR_GUEST_MEM_ENCRYPT));
 }
diff --git a/arch/x86/kernel/kvm.c b/arch/x86/kernel/kvm.c
index a26643dc6bd6..9d08ad2f3faa 100644
--- a/arch/x86/kernel/kvm.c
+++ b/arch/x86/kernel/kvm.c
@@ -27,6 +27,7 @@
 #include 
 #include 
 #include 
+#include 
 #include 
 #include 
 #include 
@@ -418,7 +419,7 @@ static void __init sev_map_percpu_data(void)
 {
int cpu;
 
-   if (!sev_active())
+   if (!prot_guest_has(PATTR_GUEST_MEM_ENCRYPT))
return;
 
for_each_possible_cpu(cpu) {
diff --git a/arch/x86/kernel/kvmclock.c b/arch/x86/kernel/kvmclock.c
index ad273e5861c1..f7ba78a23dcd 100644
--- a/arch/x86/kernel/kvmclock.c
+++ b/arch/x86/kernel/kvmclock.c
@@ -16,9 +16,9 @@
 #include 
 #include 
 #include 
+#include 
 
 #include 
-#include 
 #include 
 #include 
 
@@ -232,7 +232,7 @@ static void __init kvmclock_init_mem(void)
 * hvclock is shared between the guest and the hypervisor, must
 * be mapped decrypted.
 */
-   if (sev_active()) {
+   if (prot_guest_has(PATTR_GUEST_MEM_ENCRYPT)) {
r = set_memory_decrypted((unsigned long) hvclock_mem,
 1UL << order);
if (r) {
diff --git a/arch/x86/kernel/machine_kexec_64.c b/arch/x86/kernel/machine_kexec_64.c
index 8e7b517ad738..66ff788b79c9 100644
--- a/arch/x86/kernel/machine_kexec_64.c
+++ b/arch/x86/kernel/machine_kexec_64.c
@@ -167,7 +167,7 @@ static int init_transition_pgtable(struct kimage *image, pgd_t *pgd)
}
pte = pte_offset_kernel(pmd, vaddr);
 
-   if (sev_active())
+   if (prot_guest_has(PATTR_GUEST_MEM_ENCRYPT))
prot = PAGE_KERNEL_EXEC;
 
set_pte(pte, pfn_pte(paddr >> PAGE_SHIFT, prot));
@@ -207,7 +207,7 @@ static int init_pgtable(struct kimage *image, unsigned long start_pgtable)
level4p = (pgd_t *)__va(start_pgtable);
clear_page(level4p);
 
-   if (sev_active()) {
+   if (prot_guest_has(PATTR_GUEST_MEM_ENCRYPT)) {
info.page_flag   |= _PAGE_ENC;
info.kernpg_flag |= _PAGE_ENC;
}
@@ -570,12 +570,12 @@ void arch_kexec_unprotect_crashkres(void)
  */
 int arch_kexec_post_alloc_pages(void *vaddr, unsigned int pages, gfp_t gfp)
 {
-   if (sev_active())
+   if (!prot_guest_has(PATTR_HOST_MEM_ENCRYPT))
return 0;
 
/*
-* If SME is active we 

[PATCH 04/11] x86/sme: Replace occurrences of sme_active() with prot_guest_has()

2021-07-27 Thread Tom Lendacky
Replace occurrences of sme_active() with the more generic prot_guest_has()
using PATTR_HOST_MEM_ENCRYPT, except for in arch/x86/mm/mem_encrypt*.c
where PATTR_SME will be used. If future support is added for other memory
encryption technologies, the use of PATTR_HOST_MEM_ENCRYPT can be
updated, as required, to use PATTR_SME.

Cc: Thomas Gleixner 
Cc: Ingo Molnar 
Cc: Borislav Petkov 
Cc: Dave Hansen 
Cc: Andy Lutomirski 
Cc: Peter Zijlstra 
Cc: Joerg Roedel 
Cc: Will Deacon 
Signed-off-by: Tom Lendacky 
---
 arch/x86/include/asm/kexec.h |  2 +-
 arch/x86/include/asm/mem_encrypt.h   |  2 --
 arch/x86/kernel/machine_kexec_64.c   |  3 ++-
 arch/x86/kernel/pci-swiotlb.c|  9 -
 arch/x86/kernel/relocate_kernel_64.S |  2 +-
 arch/x86/mm/ioremap.c|  6 +++---
 arch/x86/mm/mem_encrypt.c| 10 +-
 arch/x86/mm/mem_encrypt_identity.c   |  3 ++-
 arch/x86/realmode/init.c |  5 +++--
 drivers/iommu/amd/init.c |  7 ---
 10 files changed, 25 insertions(+), 24 deletions(-)

diff --git a/arch/x86/include/asm/kexec.h b/arch/x86/include/asm/kexec.h
index 0a6e34b07017..11b7c06e2828 100644
--- a/arch/x86/include/asm/kexec.h
+++ b/arch/x86/include/asm/kexec.h
@@ -129,7 +129,7 @@ relocate_kernel(unsigned long indirection_page,
unsigned long page_list,
unsigned long start_address,
unsigned int preserve_context,
-   unsigned int sme_active);
+   unsigned int host_mem_enc_active);
 #endif
 
 #define ARCH_HAS_KIMAGE_ARCH
diff --git a/arch/x86/include/asm/mem_encrypt.h b/arch/x86/include/asm/mem_encrypt.h
index a46d47662772..956338406cec 100644
--- a/arch/x86/include/asm/mem_encrypt.h
+++ b/arch/x86/include/asm/mem_encrypt.h
@@ -50,7 +50,6 @@ void __init mem_encrypt_free_decrypted_mem(void);
 void __init mem_encrypt_init(void);
 
 void __init sev_es_init_vc_handling(void);
-bool sme_active(void);
 bool sev_active(void);
 bool sev_es_active(void);
 bool amd_prot_guest_has(unsigned int attr);
@@ -76,7 +75,6 @@ static inline void __init sme_encrypt_kernel(struct boot_params *bp) { }
 static inline void __init sme_enable(struct boot_params *bp) { }
 
 static inline void sev_es_init_vc_handling(void) { }
-static inline bool sme_active(void) { return false; }
 static inline bool sev_active(void) { return false; }
 static inline bool sev_es_active(void) { return false; }
 static inline bool amd_prot_guest_has(unsigned int attr) { return false; }
diff --git a/arch/x86/kernel/machine_kexec_64.c b/arch/x86/kernel/machine_kexec_64.c
index 131f30fdcfbd..8e7b517ad738 100644
--- a/arch/x86/kernel/machine_kexec_64.c
+++ b/arch/x86/kernel/machine_kexec_64.c
@@ -17,6 +17,7 @@
 #include 
 #include 
 #include 
+#include 
 
 #include 
 #include 
@@ -358,7 +359,7 @@ void machine_kexec(struct kimage *image)
   (unsigned long)page_list,
   image->start,
   image->preserve_context,
-  sme_active());
+  prot_guest_has(PATTR_HOST_MEM_ENCRYPT));
 
 #ifdef CONFIG_KEXEC_JUMP
if (image->preserve_context)
diff --git a/arch/x86/kernel/pci-swiotlb.c b/arch/x86/kernel/pci-swiotlb.c
index c2cfa5e7c152..bd9a9cfbc9a2 100644
--- a/arch/x86/kernel/pci-swiotlb.c
+++ b/arch/x86/kernel/pci-swiotlb.c
@@ -6,7 +6,7 @@
 #include 
 #include 
 #include 
-#include 
+#include 
 
 #include 
 #include 
@@ -45,11 +45,10 @@ int __init pci_swiotlb_detect_4gb(void)
swiotlb = 1;
 
/*
-* If SME is active then swiotlb will be set to 1 so that bounce
-* buffers are allocated and used for devices that do not support
-* the addressing range required for the encryption mask.
+* Set swiotlb to 1 so that bounce buffers are allocated and used for
+* devices that can't support DMA to encrypted memory.
 */
-   if (sme_active())
+   if (prot_guest_has(PATTR_HOST_MEM_ENCRYPT))
swiotlb = 1;
 
return swiotlb;
diff --git a/arch/x86/kernel/relocate_kernel_64.S b/arch/x86/kernel/relocate_kernel_64.S
index c53271aebb64..c8fe74a28143 100644
--- a/arch/x86/kernel/relocate_kernel_64.S
+++ b/arch/x86/kernel/relocate_kernel_64.S
@@ -47,7 +47,7 @@ SYM_CODE_START_NOALIGN(relocate_kernel)
 * %rsi page_list
 * %rdx start address
 * %rcx preserve_context
-* %r8  sme_active
+* %r8  host_mem_enc_active
 */
 
/* Save the CPU context, used for jumping back */
diff --git a/arch/x86/mm/ioremap.c b/arch/x86/mm/ioremap.c
index 60ade7dd71bd..f899f02c0241 100644
--- a/arch/x86/mm/ioremap.c
+++ b/arch/x86/mm/ioremap.c
@@ -14,7 +14,7 @@
 #include 
 #include 
 #include 
-#include 
+#include 
 #include 
 #include 
 
@@ -702,7 +702,7 @@ bool arch_memremap_can_ram_remap(resource_size_t phys_addr, unsigned long size,
if (flags & 

[PATCH 11/11] s390/mm: Remove the now unused mem_encrypt_active() function

2021-07-27 Thread Tom Lendacky
The mem_encrypt_active() function has been replaced by prot_guest_has(),
so remove the implementation. Since the default implementation of
prot_guest_has() matches the s390 implementation of mem_encrypt_active(),
prot_guest_has() does not need to be implemented in s390 (the config
option ARCH_HAS_PROTECTED_GUEST is not set).
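
For reference, the equivalence this relies on; both lines appear verbatim
elsewhere in this series:

/* s390 implementation being removed: */
static inline bool mem_encrypt_active(void) { return false; }

/* generic fallback in include/linux/protected_guest.h, used when
 * ARCH_HAS_PROTECTED_GUEST is not selected: */
static inline bool prot_guest_has(unsigned int attr) { return false; }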

Cc: Heiko Carstens 
Cc: Vasily Gorbik 
Cc: Christian Borntraeger 
Signed-off-by: Tom Lendacky 
---
 arch/s390/include/asm/mem_encrypt.h | 2 --
 1 file changed, 2 deletions(-)

diff --git a/arch/s390/include/asm/mem_encrypt.h b/arch/s390/include/asm/mem_encrypt.h
index 2542cbf7e2d1..08a8b96606d7 100644
--- a/arch/s390/include/asm/mem_encrypt.h
+++ b/arch/s390/include/asm/mem_encrypt.h
@@ -4,8 +4,6 @@
 
 #ifndef __ASSEMBLY__
 
-static inline bool mem_encrypt_active(void) { return false; }
-
 int set_memory_encrypted(unsigned long addr, int numpages);
 int set_memory_decrypted(unsigned long addr, int numpages);
 
-- 
2.32.0




[PATCH 03/11] powerpc/pseries/svm: Add a powerpc version of prot_guest_has()

2021-07-27 Thread Tom Lendacky
Introduce a powerpc version of the prot_guest_has() function. This will
be used to replace the powerpc mem_encrypt_active() implementation, so
the implementation will initially only support the PATTR_MEM_ENCRYPT
attribute.
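
In effect, for callers on pseries (matching the implementation added
below):

/* old, powerpc-specific check: */
if (mem_encrypt_active())               /* was: is_secure_guest() */
        /* ... */;

/* new, generic check; resolves to is_secure_guest() on PPC_SVM: */
if (prot_guest_has(PATTR_MEM_ENCRYPT))
        /* ... */;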

Cc: Michael Ellerman 
Cc: Benjamin Herrenschmidt 
Cc: Paul Mackerras 
Signed-off-by: Tom Lendacky 
---
 arch/powerpc/include/asm/protected_guest.h | 30 ++
 arch/powerpc/platforms/pseries/Kconfig |  1 +
 2 files changed, 31 insertions(+)
 create mode 100644 arch/powerpc/include/asm/protected_guest.h

diff --git a/arch/powerpc/include/asm/protected_guest.h b/arch/powerpc/include/asm/protected_guest.h
new file mode 100644
index ..ce55c2c7e534
--- /dev/null
+++ b/arch/powerpc/include/asm/protected_guest.h
@@ -0,0 +1,30 @@
+/* SPDX-License-Identifier: GPL-2.0-only */
+/*
+ * Protected Guest (and Host) Capability checks
+ *
+ * Copyright (C) 2021 Advanced Micro Devices, Inc.
+ *
+ * Author: Tom Lendacky 
+ */
+
+#ifndef _POWERPC_PROTECTED_GUEST_H
+#define _POWERPC_PROTECTED_GUEST_H
+
+#include 
+
+#ifndef __ASSEMBLY__
+
+static inline bool prot_guest_has(unsigned int attr)
+{
+   switch (attr) {
+   case PATTR_MEM_ENCRYPT:
+   return is_secure_guest();
+
+   default:
+   return false;
+   }
+}
+
+#endif /* __ASSEMBLY__ */
+
+#endif /* _POWERPC_PROTECTED_GUEST_H */
diff --git a/arch/powerpc/platforms/pseries/Kconfig b/arch/powerpc/platforms/pseries/Kconfig
index 5e037df2a3a1..8ce5417d6feb 100644
--- a/arch/powerpc/platforms/pseries/Kconfig
+++ b/arch/powerpc/platforms/pseries/Kconfig
@@ -159,6 +159,7 @@ config PPC_SVM
select SWIOTLB
select ARCH_HAS_MEM_ENCRYPT
select ARCH_HAS_FORCE_DMA_UNENCRYPTED
+   select ARCH_HAS_PROTECTED_GUEST
help
 There are certain POWER platforms which support secure guests using
 the Protected Execution Facility, with the help of an Ultravisor
-- 
2.32.0




[PATCH 02/11] x86/sev: Add an x86 version of prot_guest_has()

2021-07-27 Thread Tom Lendacky
Introduce an x86 version of the prot_guest_has() function. This will be
used in the more generic x86 code to replace vendor specific calls like
sev_active(), etc.

While the name suggests this is intended mainly for guests, it will
also be used for host memory encryption checks in place of sme_active().

The amd_prot_guest_has() function does not use EXPORT_SYMBOL_GPL for the
same reasons previously stated when changing sme_active(), sev_active() and
sme_me_mask to EXPORT_SYMBOL:
  commit 87df26175e67 ("x86/mm: Unbreak modules that rely on external 
PAGE_KERNEL availability")
  commit 9d5f38ba6c82 ("x86/mm: Unbreak modules that use the DMA API")
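
A sketch of why the export matters: prot_guest_has() is a header inline
that expands to a call to amd_prot_guest_has() when sme_me_mask is set, so
a module using the inline needs the out-of-line symbol. The module function
below is hypothetical, purely to illustrate the call path:

#include <linux/errno.h>
#include <linux/protected_guest.h>

/* hypothetical module code; prot_guest_has() here is the inline from
 * asm/protected_guest.h and compiles down to a call to the exported
 * amd_prot_guest_has() when sme_me_mask is non-zero */
static int example_module_init(void)
{
        if (prot_guest_has(PATTR_MEM_ENCRYPT))
                return -ENODEV;         /* illustrative policy only */

        return 0;
}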

Cc: Thomas Gleixner 
Cc: Ingo Molnar 
Cc: Borislav Petkov 
Cc: Dave Hansen 
Cc: Andy Lutomirski 
Cc: Peter Zijlstra 
Co-developed-by: Andi Kleen 
Signed-off-by: Andi Kleen 
Co-developed-by: Kuppuswamy Sathyanarayanan 

Signed-off-by: Kuppuswamy Sathyanarayanan 

Signed-off-by: Tom Lendacky 
---
 arch/x86/Kconfig   |  1 +
 arch/x86/include/asm/mem_encrypt.h |  2 ++
 arch/x86/include/asm/protected_guest.h | 27 ++
 arch/x86/mm/mem_encrypt.c  | 25 
 include/linux/protected_guest.h|  5 +
 5 files changed, 60 insertions(+)
 create mode 100644 arch/x86/include/asm/protected_guest.h

diff --git a/arch/x86/Kconfig b/arch/x86/Kconfig
index 49270655e827..e47213cbfc55 100644
--- a/arch/x86/Kconfig
+++ b/arch/x86/Kconfig
@@ -1514,6 +1514,7 @@ config AMD_MEM_ENCRYPT
select ARCH_HAS_FORCE_DMA_UNENCRYPTED
select INSTRUCTION_DECODER
select ARCH_HAS_RESTRICTED_VIRTIO_MEMORY_ACCESS
+   select ARCH_HAS_PROTECTED_GUEST
help
  Say yes to enable support for the encryption of system memory.
  This requires an AMD processor that supports Secure Memory
diff --git a/arch/x86/include/asm/mem_encrypt.h b/arch/x86/include/asm/mem_encrypt.h
index 9c80c68d75b5..a46d47662772 100644
--- a/arch/x86/include/asm/mem_encrypt.h
+++ b/arch/x86/include/asm/mem_encrypt.h
@@ -53,6 +53,7 @@ void __init sev_es_init_vc_handling(void);
 bool sme_active(void);
 bool sev_active(void);
 bool sev_es_active(void);
+bool amd_prot_guest_has(unsigned int attr);
 
 #define __bss_decrypted __section(".bss..decrypted")
 
@@ -78,6 +79,7 @@ static inline void sev_es_init_vc_handling(void) { }
 static inline bool sme_active(void) { return false; }
 static inline bool sev_active(void) { return false; }
 static inline bool sev_es_active(void) { return false; }
+static inline bool amd_prot_guest_has(unsigned int attr) { return false; }
 
 static inline int __init
early_set_memory_decrypted(unsigned long vaddr, unsigned long size) { return 0; }
diff --git a/arch/x86/include/asm/protected_guest.h b/arch/x86/include/asm/protected_guest.h
new file mode 100644
index ..b4a267dddf93
--- /dev/null
+++ b/arch/x86/include/asm/protected_guest.h
@@ -0,0 +1,27 @@
+/* SPDX-License-Identifier: GPL-2.0-only */
+/*
+ * Protected Guest (and Host) Capability checks
+ *
+ * Copyright (C) 2021 Advanced Micro Devices, Inc.
+ *
+ * Author: Tom Lendacky 
+ */
+
+#ifndef _X86_PROTECTED_GUEST_H
+#define _X86_PROTECTED_GUEST_H
+
+#include 
+
+#ifndef __ASSEMBLY__
+
+static inline bool prot_guest_has(unsigned int attr)
+{
+   if (sme_me_mask)
+   return amd_prot_guest_has(attr);
+
+   return false;
+}
+
+#endif /* __ASSEMBLY__ */
+
+#endif /* _X86_PROTECTED_GUEST_H */
diff --git a/arch/x86/mm/mem_encrypt.c b/arch/x86/mm/mem_encrypt.c
index ff08dc463634..7d3b2c6f5f88 100644
--- a/arch/x86/mm/mem_encrypt.c
+++ b/arch/x86/mm/mem_encrypt.c
@@ -20,6 +20,7 @@
 #include 
 #include 
 #include 
+#include 
 
 #include 
 #include 
@@ -389,6 +390,30 @@ bool noinstr sev_es_active(void)
return sev_status & MSR_AMD64_SEV_ES_ENABLED;
 }
 
+bool amd_prot_guest_has(unsigned int attr)
+{
+   switch (attr) {
+   case PATTR_MEM_ENCRYPT:
+   return sme_me_mask != 0;
+
+   case PATTR_SME:
+   case PATTR_HOST_MEM_ENCRYPT:
+   return sme_active();
+
+   case PATTR_SEV:
+   case PATTR_GUEST_MEM_ENCRYPT:
+   return sev_active();
+
+   case PATTR_SEV_ES:
+   case PATTR_GUEST_PROT_STATE:
+   return sev_es_active();
+
+   default:
+   return false;
+   }
+}
+EXPORT_SYMBOL(amd_prot_guest_has);
+
 /* Override for DMA direct allocation check - ARCH_HAS_FORCE_DMA_UNENCRYPTED */
 bool force_dma_unencrypted(struct device *dev)
 {
diff --git a/include/linux/protected_guest.h b/include/linux/protected_guest.h
index f8ed7b72967b..7a7120abbb62 100644
--- a/include/linux/protected_guest.h
+++ b/include/linux/protected_guest.h
@@ -17,6 +17,11 @@
 #define PATTR_GUEST_MEM_ENCRYPT 2   /* Guest encrypted memory */
 #define PATTR_GUEST_PROT_STATE 3   /* Guest encrypted state */
 
+/* 0x800 - 0x8ff reserved for AMD */
+#define PATTR_SME  0x800
+#define PATTR_SEV

[PATCH 01/11] mm: Introduce a function to check for virtualization protection features

2021-07-27 Thread Tom Lendacky
In prep for other protected virtualization technologies, introduce a
generic helper function, prot_guest_has(), that can be used to check
for specific protection attributes, like memory encryption. This is
intended to eliminate having to add multiple technology-specific checks
to the code (e.g. if (sev_active() || tdx_active())).
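
Spelled out, the consolidation this enables; tdx_active() is the
hypothetical example from above, and remap_decrypted() is a placeholder
call site:

/* without the helper: every new technology touches every call site */
if (sev_active() || tdx_active())
        remap_decrypted();

/* with the helper: call sites stay unchanged as technologies are
 * added behind prot_guest_has() */
if (prot_guest_has(PATTR_GUEST_MEM_ENCRYPT))
        remap_decrypted();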

Co-developed-by: Andi Kleen 
Signed-off-by: Andi Kleen 
Co-developed-by: Kuppuswamy Sathyanarayanan 

Signed-off-by: Kuppuswamy Sathyanarayanan 

Signed-off-by: Tom Lendacky 
---
 arch/Kconfig|  3 +++
 include/linux/protected_guest.h | 32 
 2 files changed, 35 insertions(+)
 create mode 100644 include/linux/protected_guest.h

diff --git a/arch/Kconfig b/arch/Kconfig
index 129df498a8e1..a47cf283f2ff 100644
--- a/arch/Kconfig
+++ b/arch/Kconfig
@@ -1231,6 +1231,9 @@ config RELR
 config ARCH_HAS_MEM_ENCRYPT
bool
 
+config ARCH_HAS_PROTECTED_GUEST
+   bool
+
 config HAVE_SPARSE_SYSCALL_NR
bool
help
diff --git a/include/linux/protected_guest.h b/include/linux/protected_guest.h
new file mode 100644
index ..f8ed7b72967b
--- /dev/null
+++ b/include/linux/protected_guest.h
@@ -0,0 +1,32 @@
+/* SPDX-License-Identifier: GPL-2.0-only */
+/*
+ * Protected Guest (and Host) Capability checks
+ *
+ * Copyright (C) 2021 Advanced Micro Devices, Inc.
+ *
+ * Author: Tom Lendacky 
+ */
+
+#ifndef _PROTECTED_GUEST_H
+#define _PROTECTED_GUEST_H
+
+#ifndef __ASSEMBLY__
+
+#define PATTR_MEM_ENCRYPT  0   /* Encrypted memory */
+#define PATTR_HOST_MEM_ENCRYPT 1   /* Host encrypted memory */
+#define PATTR_GUEST_MEM_ENCRYPT 2   /* Guest encrypted memory */
+#define PATTR_GUEST_PROT_STATE 3   /* Guest encrypted state */
+
+#ifdef CONFIG_ARCH_HAS_PROTECTED_GUEST
+
+#include 
+
+#else  /* !CONFIG_ARCH_HAS_PROTECTED_GUEST */
+
+static inline bool prot_guest_has(unsigned int attr) { return false; }
+
+#endif /* CONFIG_ARCH_HAS_PROTECTED_GUEST */
+
+#endif /* __ASSEMBLY__ */
+
+#endif /* _PROTECTED_GUEST_H */
-- 
2.32.0




[PATCH 00/11] Implement generic prot_guest_has() helper function

2021-07-27 Thread Tom Lendacky
This patch series provides a generic helper function, prot_guest_has(),
to replace the sme_active(), sev_active(), sev_es_active() and
mem_encrypt_active() functions.

It is expected that as new protected virtualization technologies are
added to the kernel, they can all be covered by a single function call
instead of a collection of specific function calls all called from the
same locations.

The powerpc and s390 patches have been compile tested only. Can the
folks copied on this series verify that nothing breaks for them?

Cc: Andi Kleen 
Cc: Andy Lutomirski 
Cc: Ard Biesheuvel 
Cc: Baoquan He 
Cc: Benjamin Herrenschmidt 
Cc: Borislav Petkov 
Cc: Christian Borntraeger 
Cc: Daniel Vetter 
Cc: Dave Hansen 
Cc: Dave Young 
Cc: David Airlie 
Cc: Heiko Carstens 
Cc: Ingo Molnar 
Cc: Joerg Roedel 
Cc: Maarten Lankhorst 
Cc: Maxime Ripard 
Cc: Michael Ellerman 
Cc: Paul Mackerras 
Cc: Peter Zijlstra 
Cc: Thomas Gleixner 
Cc: Thomas Zimmermann 
Cc: Vasily Gorbik 
Cc: VMware Graphics 
Cc: Will Deacon 

---

Patches based on:
  https://git.kernel.org/pub/scm/linux/kernel/git/tip/tip.git master
  commit 79e920060fa7 ("Merge branch 'WIP/fixes'")

Tom Lendacky (11):
  mm: Introduce a function to check for virtualization protection
features
  x86/sev: Add an x86 version of prot_guest_has()
  powerpc/pseries/svm: Add a powerpc version of prot_guest_has()
  x86/sme: Replace occurrences of sme_active() with prot_guest_has()
  x86/sev: Replace occurrences of sev_active() with prot_guest_has()
  x86/sev: Replace occurrences of sev_es_active() with prot_guest_has()
  treewide: Replace the use of mem_encrypt_active() with
prot_guest_has()
  mm: Remove the now unused mem_encrypt_active() function
  x86/sev: Remove the now unused mem_encrypt_active() function
  powerpc/pseries/svm: Remove the now unused mem_encrypt_active()
function
  s390/mm: Remove the now unused mem_encrypt_active() function

 arch/Kconfig   |  3 ++
 arch/powerpc/include/asm/mem_encrypt.h |  5 --
 arch/powerpc/include/asm/protected_guest.h | 30 +++
 arch/powerpc/platforms/pseries/Kconfig |  1 +
 arch/s390/include/asm/mem_encrypt.h|  2 -
 arch/x86/Kconfig   |  1 +
 arch/x86/include/asm/kexec.h   |  2 +-
 arch/x86/include/asm/mem_encrypt.h | 13 +
 arch/x86/include/asm/protected_guest.h | 27 ++
 arch/x86/kernel/crash_dump_64.c|  4 +-
 arch/x86/kernel/head64.c   |  4 +-
 arch/x86/kernel/kvm.c  |  3 +-
 arch/x86/kernel/kvmclock.c |  4 +-
 arch/x86/kernel/machine_kexec_64.c | 19 +++
 arch/x86/kernel/pci-swiotlb.c  |  9 ++--
 arch/x86/kernel/relocate_kernel_64.S   |  2 +-
 arch/x86/kernel/sev.c  |  6 +--
 arch/x86/kvm/svm/svm.c |  3 +-
 arch/x86/mm/ioremap.c  | 16 +++---
 arch/x86/mm/mem_encrypt.c  | 60 +++---
 arch/x86/mm/mem_encrypt_identity.c |  3 +-
 arch/x86/mm/pat/set_memory.c   |  3 +-
 arch/x86/platform/efi/efi_64.c |  9 ++--
 arch/x86/realmode/init.c   |  8 +--
 drivers/gpu/drm/amd/amdgpu/amdgpu_drv.c|  4 +-
 drivers/gpu/drm/drm_cache.c|  4 +-
 drivers/gpu/drm/vmwgfx/vmwgfx_drv.c|  4 +-
 drivers/gpu/drm/vmwgfx/vmwgfx_msg.c|  6 +--
 drivers/iommu/amd/init.c   |  7 +--
 drivers/iommu/amd/iommu.c  |  3 +-
 drivers/iommu/amd/iommu_v2.c   |  3 +-
 drivers/iommu/iommu.c  |  3 +-
 fs/proc/vmcore.c   |  6 +--
 include/linux/mem_encrypt.h|  4 --
 include/linux/protected_guest.h| 37 +
 kernel/dma/swiotlb.c   |  4 +-
 36 files changed, 218 insertions(+), 104 deletions(-)
 create mode 100644 arch/powerpc/include/asm/protected_guest.h
 create mode 100644 arch/x86/include/asm/protected_guest.h
 create mode 100644 include/linux/protected_guest.h

-- 
2.32.0




Re: [PATCH printk v4 0/6] printk: remove safe buffers

2021-07-27 Thread Petr Mladek
On Thu 2021-07-15 21:39:53, John Ogness wrote:
> Hi,
> 
> Here is v4 of a series to remove the safe buffers. v3 can be
> found here [0]. The safe buffers are no longer needed because
> messages can be stored directly into the log buffer from any
> context.
> 
> However, the safe buffers also provided a form of recursion
> protection. For that reason, explicit recursion protection is
> implemented for this series.
> 
> The safe buffers also implicitly provided serialization
> between multiple CPUs executing in NMI context. This was
> particularly necessary for the nmi_backtrace() output. This
> serialization is now preserved by using the printk cpulock.
> 
> With the removal of the safe buffers, there is no need for
> extra NMI enter/exit tracking. So this is also removed
> (which includes removing the config option CONFIG_PRINTK_NMI).
> 
> And finally, there are a few places in the kernel that need to
> specify code blocks where all printk calls are to be deferred
> printing. Previously the NMI tracking API was being (mis)used
> for this purpose. This series introduces an official and
> explicit interface for such cases. (Note that all deferred
> printing will be removed anyway, once printing kthreads are
> introduced.)
> 
> John Ogness (6):
>   lib/nmi_backtrace: explicitly serialize banner and regs
>   printk: track/limit recursion
>   printk: remove safe buffers
>   printk: remove NMI tracking
>   printk: convert @syslog_lock to mutex
>   printk: syslog: close window between wait and read

The entire patchset has been committed into printk/linux.git,
branch rework/printk_safe-removal.

Note that I have updated the 4th patch as discussed, see
https://lore.kernel.org/r/20210721120026.y3dqno24ahw4s...@pathway.suse.cz
https://lore.kernel.org/r/20210721130852.zrjnti6b3fwjg...@pathway.suse.cz

Best Regards,
Petr



Re: Compiling makedumpfile from source

2021-07-27 Thread manty kuma
Hi kazuhito san,

Just following up on my last email.
Sincere apologies for asking for your time.
I want to specifically understand what the "user data" section is and
what it means to exclude it from the dump.

Thank you very much.

Best Regards,
Manty

On Fri, Jul 9, 2021 at 10:45 AM manty kuma  wrote:
>
> Hi Kazuhito san,
>
> I am looking to better understand the sections being filtered out
> with each of the following options.
>
> Zero page:
> Pages that are empty. Ignoring these pages won't have any impact on analysis.
>
> non-private cache and private cache:
> What exactly are these sections of memory? Just a one-line overview
> about them is sufficient.
> (My understanding was that cache is not part of RAM. Is this cache
> something else? Like some bookkeeping data maintained by the kernel?)
>
>
> user data:
> Are these sections of the memory for the user space processes/memory
> sections allocated using malloc?
> My understanding is that if I exclude this section, gcore would not
> work. Is my understanding correct?
> I expected this section to be big. But in fact excluding this did not
> have much impact on the dump size.
>
> free page:
> Unallocated pages. Since they are not allocated, filtering them out
> won't have any impact on dump analysis.
> Please correct me if I am wrong.
>
>
> If there is already some place that explains what these sections
> filter out, please just drop the reference to them and I will look
> into it.
> Thank you very much in advance!
>
> Manty
>
> On Thu, Jul 8, 2021 at 8:17 PM manty kuma  wrote:
> >
> > Sorry. I am not sure how but I completely missed this email.
> > Yes, /tmp was not available in my env. I just did mkdir before
> > executing `makedumpfile` and it is now working well.
> > Thank you very much.
> >
> > On Mon, Jun 28, 2021 at 4:07 PM HAGIO KAZUHITO(萩尾 一仁)
> >  wrote:
> > >
> > > -----Original Message-----
> > > > Hi Kazuhito san,
> > > >
> > > > I am getting the following error when trying to use the makedumpfile
> > > > utility.
> > > >
> > > > > copy_vmcoreinfo: Can't open the vmcoreinfo 
> > > > > file(/tmp/vmcoreinfoLUQc25). No such file or directory.
> > > > > makedumpfile Failed
> > > >
> > > > In your setup how are you providing the vmcoreinfo file? In my case it
> > > > is checking /tmp/vmcoreinfoLUQc25
> > > > Who generates this file?
> > >
> > > Generally, vmcoreinfo is copied from vmcore's ELF note to 
> > > /tmp/vmcoreinfoXX
> > > by makedumpfile, please see copy_vmcoreinfo().  So there is no need
> > > to provide it explicitly.
> > >
> > > Does the /tmp directory exist in your environment?
> > >
> > > Kazu
> > >
> > >
