Re: [PATCH 1/1] crypto: ccp - Allocate TEE ring and cmd buffer using DMA APIs

2022-12-12 Thread Tom Lendacky

On 12/12/22 05:21, Rijo Thomas wrote:

On 12/10/2022 2:31 AM, Tom Lendacky wrote:

On 12/6/22 06:30, Rijo Thomas wrote:

For AMD Secure Processor (ASP) to map and access TEE ring buffer, the
ring buffer address sent by host to ASP must be a real physical
address and the pages must be physically contiguous.

In a virtualized environment though, when the driver is running in a
guest VM, the pages allocated by __get_free_pages() may not be
contiguous in the host (or machine) physical address space. Guests
will see a guest (or pseudo) physical address and not the actual host
(or machine) physical address. The TEE running on ASP cannot decipher
pseudo physical addresses. It needs host or machine physical address.

To resolve this problem, use DMA APIs for allocating buffers that must
be shared with TEE. This will ensure that the pages are contiguous in
host (or machine) address space. If the DMA handle is an IOVA,
translate it into a physical address before sending it to ASP.

This patch also exports two APIs (one for buffer allocation and
another to free the buffer). These APIs can be used by the AMD-TEE
driver to share buffers with the TEE.

Fixes: 33960acccfbd ("crypto: ccp - add TEE support for Raven Ridge")
Cc: Tom Lendacky 
Cc: sta...@vger.kernel.org
Signed-off-by: Rijo Thomas 
Co-developed-by: Jeshwanth 
Signed-off-by: Jeshwanth 
Reviewed-by: Devaraj Rangasamy 
---
   drivers/crypto/ccp/psp-dev.c |   6 +-
   drivers/crypto/ccp/tee-dev.c | 116 ++-
   drivers/crypto/ccp/tee-dev.h |   9 +--
   include/linux/psp-tee.h  |  47 ++
   4 files changed, 127 insertions(+), 51 deletions(-)

diff --git a/drivers/crypto/ccp/psp-dev.c b/drivers/crypto/ccp/psp-dev.c
index c9c741ac8442..2b86158d7435 100644
--- a/drivers/crypto/ccp/psp-dev.c
+++ b/drivers/crypto/ccp/psp-dev.c
@@ -161,13 +161,13 @@ int psp_dev_init(struct sp_device *sp)
   goto e_err;
   }
   +    if (sp->set_psp_master_device)
+    sp->set_psp_master_device(sp);
+


This worries me a bit...  if psp_init() fails, it may still be referenced as 
the master device. What's the reason for moving it?


Hmm. Okay, I see your point.

In psp_tee_alloc_dmabuf(), we call psp_get_master_device(). Without the above
change, psp_get_master_device() returns NULL.

I think in psp_dev_init(), we can add the below error handling:

	ret = psp_init(psp);
	if (ret)
		goto e_init;
	...

e_init:
	if (sp->clear_psp_master_device)
		sp->clear_psp_master_device(sp);

Will this help address your concern?


Yes, that works.
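
Putting the pieces together, a minimal sketch of the reworked flow (the
e_init label and the unwind ordering follow the proposal above; the
function's other teardown labels are elided):

	/* Set the master early so psp_tee_alloc_dmabuf(), called during
	 * psp_init(), can find the device via psp_get_master_device().
	 */
	if (sp->set_psp_master_device)
		sp->set_psp_master_device(sp);

	ret = psp_init(psp);
	if (ret)
		goto e_init;

	/* Enable interrupt */
	iowrite32(-1, psp->io_regs + psp->vdata->inten_reg);

	return 0;

e_init:
	/* Don't leave a half-initialized device registered as master */
	if (sp->clear_psp_master_device)
		sp->clear_psp_master_device(sp);
	...
	return ret;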






   ret = psp_init(psp);
   if (ret)
   goto e_irq;
   -    if (sp->set_psp_master_device)
-    sp->set_psp_master_device(sp);
-
   /* Enable interrupt */
   iowrite32(-1, psp->io_regs + psp->vdata->inten_reg);
   diff --git a/drivers/crypto/ccp/tee-dev.c b/drivers/crypto/ccp/tee-dev.c
index 5c9d47f3be37..1631d9851e54 100644
--- a/drivers/crypto/ccp/tee-dev.c
+++ b/drivers/crypto/ccp/tee-dev.c
@@ -12,8 +12,9 @@
   #include 
   #include 
   #include 
+#include 
+#include 
   #include 
-#include 
   #include 
     #include "psp-dev.h"
@@ -21,25 +22,64 @@
     static bool psp_dead;
   +struct dma_buffer *psp_tee_alloc_dmabuf(unsigned long size, gfp_t gfp)


It looks like both calls to this use the same gfp_t values, so you can probably 
eliminate the argument from the call and just specify the flags in here.



Okay, I will remove gfp_t flag from the function argument.
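
A sketch of the resulting signature, assuming GFP_KERNEL is what both
existing callers pass:

struct dma_buffer *psp_tee_alloc_dmabuf(unsigned long size)
{
	...
	/* gfp flags are now fixed here instead of supplied by the caller */
	dma_buf->vaddr = dma_alloc_coherent(psp->dev, size,
					    &dma_buf->dma, GFP_KERNEL);
	...
}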


+{
+    struct psp_device *psp = psp_get_master_device();
+    struct dma_buffer *dma_buf;
+    struct iommu_domain *dom;
+
+    if (!psp || !size)
+    return NULL;
+
+    dma_buf = kzalloc(sizeof(*dma_buf), GFP_KERNEL);
+    if (!dma_buf)
+    return NULL;
+
+    dma_buf->vaddr = dma_alloc_coherent(psp->dev, size, &dma_buf->dma, gfp);
+    if (!dma_buf->vaddr || !dma_buf->dma) {


I don't think you can have one of these be NULL without both being NULL, but I 
guess it doesn't hurt to check.



Okay, we will keep both checks for now.


+    kfree(dma_buf);
+    return NULL;
+    }
+
+    dma_buf->size = size;
+
+    dom = iommu_get_domain_for_dev(psp->dev);
+    if (dom)


You need a nice comment above all this explaining that. I guess you're using 
the presence of a domain to determine whether you're running on bare-metal vs 
within a hypervisor? I'm not sure what will happen if the guest ever gets an 
emulated IOMMU...


Sure we will add a comment.

We are not trying to determine bare-metal vs hypervisor here, but whether 
the DMA handle returned by dma_alloc_coherent() is an IOVA or not.
If the address is an IOVA, we convert it to a physical address using 
iommu_iova_to_phys(). This was our intention.
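
A possible shape for that comment (wording is only a suggestion):

	/*
	 * If the device sits behind an IOMMU, the handle returned by
	 * dma_alloc_coherent() is an IOVA rather than a physical address,
	 * which the TEE running on the ASP cannot use. Translate it back
	 * through the IOMMU domain. With no domain present, the DMA handle
	 * already is the physical address.
	 */
	dom = iommu_get_domain_for_dev(psp->dev);
	if (dom)
		dma_buf->paddr = iommu_iova_to_phys(dom, dma_buf->dma);
	else
		dma_buf->paddr = dma_buf->dma;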


Ok, but if a VM gets an emulated IOMMU and ends up with an IOVA, that IOVA 
points to a GPA - and if the size is over a page, then you aren't 
guaranteed to have contiguous physical memory.


Probably not 

Re: [PATCH 1/1] crypto: ccp - Allocate TEE ring and cmd buffer using DMA APIs

2022-12-09 Thread Tom Lendacky

On 12/6/22 06:30, Rijo Thomas wrote:

For AMD Secure Processor (ASP) to map and access TEE ring buffer, the
ring buffer address sent by host to ASP must be a real physical
address and the pages must be physically contiguous.

In a virtualized environment though, when the driver is running in a
guest VM, the pages allocated by __get_free_pages() may not be
contiguous in the host (or machine) physical address space. Guests
will see a guest (or pseudo) physical address and not the actual host
(or machine) physical address. The TEE running on ASP cannot decipher
pseudo physical addresses. It needs host or machine physical address.

To resolve this problem, use DMA APIs for allocating buffers that must
be shared with TEE. This will ensure that the pages are contiguous in
host (or machine) address space. If the DMA handle is an IOVA,
translate it into a physical address before sending it to ASP.

This patch also exports two APIs (one for buffer allocation and
another to free the buffer). These APIs can be used by the AMD-TEE
driver to share buffers with the TEE.

Fixes: 33960acccfbd ("crypto: ccp - add TEE support for Raven Ridge")
Cc: Tom Lendacky 
Cc: sta...@vger.kernel.org
Signed-off-by: Rijo Thomas 
Co-developed-by: Jeshwanth 
Signed-off-by: Jeshwanth 
Reviewed-by: Devaraj Rangasamy 
---
  drivers/crypto/ccp/psp-dev.c |   6 +-
  drivers/crypto/ccp/tee-dev.c | 116 ++-
  drivers/crypto/ccp/tee-dev.h |   9 +--
  include/linux/psp-tee.h  |  47 ++
  4 files changed, 127 insertions(+), 51 deletions(-)

diff --git a/drivers/crypto/ccp/psp-dev.c b/drivers/crypto/ccp/psp-dev.c
index c9c741ac8442..2b86158d7435 100644
--- a/drivers/crypto/ccp/psp-dev.c
+++ b/drivers/crypto/ccp/psp-dev.c
@@ -161,13 +161,13 @@ int psp_dev_init(struct sp_device *sp)
goto e_err;
}
  
+	if (sp->set_psp_master_device)
+		sp->set_psp_master_device(sp);
+


This worries me a bit...  if psp_init() fails, it may still be referenced 
as the master device. What's the reason for moving it?



ret = psp_init(psp);
if (ret)
goto e_irq;
  
-	if (sp->set_psp_master_device)
-		sp->set_psp_master_device(sp);
-
/* Enable interrupt */
iowrite32(-1, psp->io_regs + psp->vdata->inten_reg);
  
diff --git a/drivers/crypto/ccp/tee-dev.c b/drivers/crypto/ccp/tee-dev.c

index 5c9d47f3be37..1631d9851e54 100644
--- a/drivers/crypto/ccp/tee-dev.c
+++ b/drivers/crypto/ccp/tee-dev.c
@@ -12,8 +12,9 @@
  #include 
  #include 
  #include 
+#include 
+#include 
  #include 
-#include 
  #include 
  
  #include "psp-dev.h"

@@ -21,25 +22,64 @@
  
  static bool psp_dead;
  
+struct dma_buffer *psp_tee_alloc_dmabuf(unsigned long size, gfp_t gfp)


It looks like both calls to this use the same gfp_t values, so you can 
probably eliminate the argument from the call and just specify the flags in here.



+{
+   struct psp_device *psp = psp_get_master_device();
+   struct dma_buffer *dma_buf;
+   struct iommu_domain *dom;
+
+   if (!psp || !size)
+   return NULL;
+
+   dma_buf = kzalloc(sizeof(*dma_buf), GFP_KERNEL);
+   if (!dma_buf)
+   return NULL;
+
+   dma_buf->vaddr = dma_alloc_coherent(psp->dev, size, &dma_buf->dma, gfp);
+   if (!dma_buf->vaddr || !dma_buf->dma) {


I don't think you can have one of these be NULL without both being NULL, 
but I guess it doesn't hurt to check.



+   kfree(dma_buf);
+   return NULL;
+   }
+
+   dma_buf->size = size;
+
+   dom = iommu_get_domain_for_dev(psp->dev);
+   if (dom)


You need a nice comment above all this explaining that. I guess you're 
using the presence of a domain to determine whether you're running on 
bare-metal vs within a hypervisor? I'm not sure what will happen if the 
guest ever gets an emulated IOMMU...



+   dma_buf->paddr = iommu_iova_to_phys(dom, dma_buf->dma);


If you're just looking to get the physical address, why not just do an 
__pa(dma_buf->vaddr)?


Also, paddr might not be the best name, since it isn't always a physical 
address, but I can't really think of something right now.


Thanks,
Tom


+   else
+   dma_buf->paddr = dma_buf->dma;
+
+   return dma_buf;
+}
+EXPORT_SYMBOL(psp_tee_alloc_dmabuf);
+
+void psp_tee_free_dmabuf(struct dma_buffer *dma_buf)
+{
+   struct psp_device *psp = psp_get_master_device();
+
+   if (!psp || !dma_buf)
+   return;
+
+   dma_free_coherent(psp->dev, dma_buf->size,
+ dma_buf->vaddr, dma_buf->dma);
+
+   kfree(dma_buf);
+}
+EXPORT_SYMBOL(psp_tee_free_dmabuf);
+
  static int tee_alloc_ring(struct psp_tee_device *tee, int ring_size)
  {
	struct ring_buf_manager *rb_mgr = &tee->rb_mgr;
-   void *start_addr;
  
	if (!ring_size)
		return -EINVAL;
  
-	/* We need act
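
As a usage sketch, a hypothetical caller in the AMD-TEE driver would pair
the two exported helpers like this (error handling abbreviated, gfp flags
illustrative):

	struct dma_buffer *buf;

	buf = psp_tee_alloc_dmabuf(ring_size, GFP_KERNEL);
	if (!buf)
		return -ENOMEM;

	/* buf->paddr is what gets handed to the ASP/TEE,
	 * buf->vaddr is for CPU-side access.
	 */
	...

	psp_tee_free_dmabuf(buf);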

Re: [PATCH v3 5/8] x86/sme: Replace occurrences of sme_active() with cc_platform_has()

2021-09-24 Thread Tom Lendacky

On 9/24/21 4:51 AM, Borislav Petkov wrote:

On Fri, Sep 24, 2021 at 12:41:32PM +0300, Kirill A. Shutemov wrote:

On Thu, Sep 23, 2021 at 08:21:03PM +0200, Borislav Petkov wrote:

On Thu, Sep 23, 2021 at 12:05:58AM +0300, Kirill A. Shutemov wrote:

Unless we find other way to guarantee RIP-relative access, we must use
fixup_pointer() to access any global variables.


Yah, I've asked compiler folks about any guarantees we have wrt
rip-relative addresses but it doesn't look good. Worst case, we'd have
to do the fixup_pointer() thing.

In the meantime, Tom and I did some more poking at this and here's a
diff ontop.

The direction being that we'll stick both the AMD and Intel
*cc_platform_has() call into cc_platform.c for which instrumentation
will be disabled so no issues with that.

And that will keep all that querying all together in a single file.


And still do cc_platform_has() calls in __startup_64() codepath?

It's broken.

Intel detection in cc_platform_has() relies on boot_cpu_data.x86_vendor,
which is not initialized until early_cpu_init() in setup_arch(). Given
that X86_VENDOR_INTEL is 0, this leads to a false positive.

Yeah, Tom, I had the same question yesterday.

/me looks in his direction.



Yup, all the things we talked about.

But we also know that cc_platform_has() gets called a few other times 
before boot_cpu_data is initialized - so more false-positives. For 
cc_platform_has() to work properly, given the desire to consolidate the 
calls, IMO, Intel will have to come up with some early setting that can be 
enabled and checked in place of boot_cpu_data or else you live with 
false-positives.
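
The zero-initialization hazard can be seen in a sketch of how the
consolidated dispatcher could look (the Intel branch is illustrative,
based on the intel_cc_platform_has() discussed below):

bool cc_platform_has(enum cc_attr attr)
{
	/*
	 * Before early_cpu_init() runs, boot_cpu_data is still zeroed,
	 * and X86_VENDOR_INTEL happens to be 0 -- so this branch is
	 * taken on every vendor during early boot.
	 */
	if (boot_cpu_data.x86_vendor == X86_VENDOR_INTEL)
		return intel_cc_platform_has(attr);

	if (sme_me_mask)
		return amd_cc_platform_has(attr);

	return false;
}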


Thanks,
Tom


:-)



Re: [PATCH v3 5/8] x86/sme: Replace occurrences of sme_active() with cc_platform_has()

2021-09-22 Thread Tom Lendacky

On 9/21/21 4:58 PM, Kirill A. Shutemov wrote:

On Tue, Sep 21, 2021 at 04:43:59PM -0500, Tom Lendacky wrote:

On 9/21/21 4:34 PM, Kirill A. Shutemov wrote:

On Tue, Sep 21, 2021 at 11:27:17PM +0200, Borislav Petkov wrote:

On Wed, Sep 22, 2021 at 12:20:59AM +0300, Kirill A. Shutemov wrote:

I still believe calling cc_platform_has() from __startup_64() is totally
broken as it lacks proper wrapping while accessing global variables.


Well, one of the issues on the AMD side was using boot_cpu_data too
early and the Intel side uses it too. Can you replace those checks with
is_tdx_guest() or whatever was the helper's name which would check
whether the kernel is running as a TDX guest, and see if that helps?


There's no need for the Intel check this early. Only AMD needs it. Maybe just
open-code them?


Any way you can put a gzipped/bzipped copy of your vmlinux file somewhere I
can grab it from and take a look at it?


You can find broken vmlinux and bzImage here:

https://drive.google.com/drive/folders/1n74vUQHOGebnF70Im32qLFY8iS3wvjIs?usp=sharing

Let me know when I can remove it.


Looking at everything, it is all RIP relative addressing, so those
accesses should be fine. Your image has the intel_cc_platform_has()
function, does it work if you remove that call? Because I think it may be
the early call into that function which looks like it has instrumentation
that uses %gs in __sanitizer_cov_trace_pc and %gs is not setup properly
yet. And since boot_cpu_data.x86_vendor will likely be zero this early it
will match X86_VENDOR_INTEL and call into that function.

8124f880 <intel_cc_platform_has>:
8124f880:   e8 bb 64 06 00          callq  812b5d40 <__fentry__>
8124f885:   e8 36 ca 42 00          callq  8167c2c0 <__sanitizer_cov_trace_pc>
8124f88a:   31 c0                   xor    %eax,%eax
8124f88c:   c3                      retq


8167c2c0 <__sanitizer_cov_trace_pc>:
8167c2c0:   65 8b 05 39 ad 9a 7e    mov    %gs:0x7e9aad39(%rip),%eax    # 27000 <__preempt_count>
8167c2c7:   89 c6                   mov    %eax,%esi
8167c2c9:   48 8b 0c 24             mov    (%rsp),%rcx
8167c2cd:   81 e6 00 01 00 00       and    $0x100,%esi
8167c2d3:   65 48 8b 14 25 40 70    mov    %gs:0x27040,%rdx

Thanks,
Tom





Re: [PATCH v3 5/8] x86/sme: Replace occurrences of sme_active() with cc_platform_has()

2021-09-21 Thread Tom Lendacky

On 9/21/21 4:34 PM, Kirill A. Shutemov wrote:

On Tue, Sep 21, 2021 at 11:27:17PM +0200, Borislav Petkov wrote:

On Wed, Sep 22, 2021 at 12:20:59AM +0300, Kirill A. Shutemov wrote:

I still believe calling cc_platform_has() from __startup_64() is totally
broken as it lacks proper wrapping while accessing global variables.


Well, one of the issues on the AMD side was using boot_cpu_data too
early and the Intel side uses it too. Can you replace those checks with
is_tdx_guest() or whatever was the helper's name which would check
whether the kernel is running as a TDX guest, and see if that helps?


There's no need for the Intel check this early. Only AMD needs it. Maybe just
open-code them?


Any way you can put a gzipped/bzipped copy of your vmlinux file somewhere 
I can grab it from and take a look at it?


Thanks,
Tom





Re: [PATCH v3 5/8] x86/sme: Replace occurrences of sme_active() with cc_platform_has()

2021-09-21 Thread Tom Lendacky

On 9/20/21 2:23 PM, Kirill A. Shutemov wrote:

On Wed, Sep 08, 2021 at 05:58:36PM -0500, Tom Lendacky wrote:

diff --git a/arch/x86/mm/mem_encrypt_identity.c 
b/arch/x86/mm/mem_encrypt_identity.c
index 470b20208430..eff4d19f9cb4 100644
--- a/arch/x86/mm/mem_encrypt_identity.c
+++ b/arch/x86/mm/mem_encrypt_identity.c
@@ -30,6 +30,7 @@
  #include 
  #include 
  #include 
+#include 
  
  #include 

  #include 
@@ -287,7 +288,7 @@ void __init sme_encrypt_kernel(struct boot_params *bp)
unsigned long pgtable_area_len;
unsigned long decrypted_base;
  
-	if (!sme_active())

+   if (!cc_platform_has(CC_ATTR_HOST_MEM_ENCRYPT))
return;
  
  	/*


This change breaks boot for me (in KVM on an Intel host). It only reproduces
with allyesconfig. A more reasonable config works fine, but I didn't try to
find the exact cause in the config.


Looks like instrumentation during early boot. I worked with Boris offline 
to exclude arch/x86/kernel/cc_platform.c from some of the instrumentation 
and that allowed an allyesconfig to boot.


Thanks,
Tom



Conversion to cc_platform_has() in __startup_64() in 8/8 has the same
effect.

I believe it is caused by the sme_me_mask access from __startup_64() without
the fixup_pointer() magic. I think __startup_64() requires special treatment
and we should avoid cc_platform_has() there (or have a special version of
the helper). Note that only AMD requires these cc_platform_has() calls to
return true.
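
For reference, the pattern being referred to -- a simplified sketch of
fixup_pointer() from arch/x86/kernel/head64.c and an early access to
sme_me_mask going through it:

static void *fixup_pointer(void *ptr, unsigned long physaddr)
{
	/* Rebase a link-time address to where the kernel actually sits;
	 * __startup_64() runs before relocations are applied.
	 */
	return ptr - (void *)_text + (void *)physaddr;
}

	/* Inside __startup_64(): */
	u64 *me_mask = fixup_pointer(&sme_me_mask, physaddr);
	if (*me_mask)
		/* ... memory encryption is active ... */;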



Re: [PATCH v3 8/8] treewide: Replace the use of mem_encrypt_active() with cc_platform_has()

2021-09-09 Thread Tom Lendacky

On 9/9/21 2:25 AM, Christophe Leroy wrote:



On 9/8/21 10:58 PM, Tom Lendacky wrote:


diff --git a/arch/powerpc/include/asm/mem_encrypt.h 
b/arch/powerpc/include/asm/mem_encrypt.h

index ba9dab07c1be..2f26b8fc8d29 100644
--- a/arch/powerpc/include/asm/mem_encrypt.h
+++ b/arch/powerpc/include/asm/mem_encrypt.h
@@ -10,11 +10,6 @@
  #include 
-static inline bool mem_encrypt_active(void)
-{
-    return is_secure_guest();
-}
-
  static inline bool force_dma_unencrypted(struct device *dev)
  {
  return is_secure_guest();
diff --git a/arch/powerpc/platforms/pseries/svm.c 
b/arch/powerpc/platforms/pseries/svm.c

index 87f001b4c4e4..c083ecbbae4d 100644
--- a/arch/powerpc/platforms/pseries/svm.c
+++ b/arch/powerpc/platforms/pseries/svm.c
@@ -8,6 +8,7 @@
  #include 
  #include 
+#include 
  #include 
  #include 
  #include 
@@ -63,7 +64,7 @@ void __init svm_swiotlb_init(void)
  int set_memory_encrypted(unsigned long addr, int numpages)
  {
-    if (!mem_encrypt_active())
+    if (!cc_platform_has(CC_ATTR_MEM_ENCRYPT))
  return 0;
  if (!PAGE_ALIGNED(addr))
@@ -76,7 +77,7 @@ int set_memory_encrypted(unsigned long addr, int 
numpages)

  int set_memory_decrypted(unsigned long addr, int numpages)
  {
-    if (!mem_encrypt_active())
+    if (!cc_platform_has(CC_ATTR_MEM_ENCRYPT))
  return 0;
  if (!PAGE_ALIGNED(addr))


This change unnecessarily complicates the two functions. This is due to 
cc_platform_has() being out of line. It should really remain inline.


Please see previous discussion(s) on this series for why the function is
implemented out of line and for the naming:

V1: https://lore.kernel.org/lkml/cover.1627424773.git.thomas.lenda...@amd.com/

V2: https://lore.kernel.org/lkml/cover.1628873970.git.thomas.lenda...@amd.com/

Thanks,
Tom



Before the change we got:

 <.set_memory_encrypted>:
    0:    7d 20 00 a6 mfmsr   r9
    4:    75 29 00 40 andis.  r9,r9,64
    8:    41 82 00 48 beq 50 <.set_memory_encrypted+0x50>
    c:    78 69 04 20 clrldi  r9,r3,48
   10:    2c 29 00 00 cmpdi   r9,0
   14:    40 82 00 4c bne 60 <.set_memory_encrypted+0x60>
   18:    7c 08 02 a6 mflr    r0
   1c:    7c 85 23 78 mr  r5,r4
   20:    78 64 85 02 rldicl  r4,r3,48,20
   24:    61 23 f1 34 ori r3,r9,61748
   28:    f8 01 00 10 std r0,16(r1)
   2c:    f8 21 ff 91 stdu    r1,-112(r1)
   30:    48 00 00 01 bl  30 <.set_memory_encrypted+0x30>
     30: R_PPC64_REL24    .ucall_norets
   34:    60 00 00 00 nop
   38:    38 60 00 00 li  r3,0
   3c:    38 21 00 70 addi    r1,r1,112
   40:    e8 01 00 10 ld  r0,16(r1)
   44:    7c 08 03 a6 mtlr    r0
   48:    4e 80 00 20 blr
   50:    38 60 00 00 li  r3,0
   54:    4e 80 00 20 blr
   60:    38 60 ff ea li  r3,-22
   64:    4e 80 00 20 blr

After the change we get:

 <.set_memory_encrypted>:
    0:    7c 08 02 a6 mflr    r0
    4:    fb c1 ff f0 std r30,-16(r1)
    8:    fb e1 ff f8 std r31,-8(r1)
    c:    7c 7f 1b 78 mr  r31,r3
   10:    38 60 00 00 li  r3,0
   14:    7c 9e 23 78 mr  r30,r4
   18:    f8 01 00 10 std r0,16(r1)
   1c:    f8 21 ff 81 stdu    r1,-128(r1)
   20:    48 00 00 01 bl  20 <.set_memory_encrypted+0x20>
     20: R_PPC64_REL24    .cc_platform_has
   24:    60 00 00 00 nop
   28:    2c 23 00 00 cmpdi   r3,0
   2c:    41 82 00 44 beq 70 <.set_memory_encrypted+0x70>
   30:    7b e9 04 20 clrldi  r9,r31,48
   34:    2c 29 00 00 cmpdi   r9,0
   38:    40 82 00 58 bne 90 <.set_memory_encrypted+0x90>
   3c:    38 60 00 00 li  r3,0
   40:    7f c5 f3 78 mr  r5,r30
   44:    7b e4 85 02 rldicl  r4,r31,48,20
   48:    60 63 f1 34 ori r3,r3,61748
   4c:    48 00 00 01 bl  4c <.set_memory_encrypted+0x4c>
     4c: R_PPC64_REL24    .ucall_norets
   50:    60 00 00 00 nop
   54:    38 60 00 00 li  r3,0
   58:    38 21 00 80 addi    r1,r1,128
   5c:    e8 01 00 10 ld  r0,16(r1)
   60:    eb c1 ff f0 ld  r30,-16(r1)
   64:    eb e1 ff f8 ld  r31,-8(r1)
   68:    7c 08 03 a6 mtlr    r0
   6c:    4e 80 00 20 blr
   70:    38 21 00 80 addi    r1,r1,128
   74:    38 60 00 00 li  r3,0
   78:    e8 01 00 10 ld  r0,16(r1)
   7c:    eb c1 ff f0 ld  r30,-16(r1)
   80:    eb e1 ff f8 ld  r31,-8(r1)
   84:    7c 08 03 a6 mtlr    r0
   88:    4e 80 00 20 blr
   90:    38 60 ff ea li  r3,-22
   94:    4b ff ff c4 b   58 <.set_memory_encrypted+0x58>



Re: [PATCH v3 0/8] Implement generic cc_platform_has() helper function

2021-09-09 Thread Tom Lendacky

On 9/9/21 2:32 AM, Christian Borntraeger wrote:



On 09.09.21 00:58, Tom Lendacky wrote:

This patch series provides a generic helper function, cc_platform_has(),
to replace the sme_active(), sev_active(), sev_es_active() and
mem_encrypt_active() functions.

It is expected that as new confidential computing technologies are
added to the kernel, they can all be covered by a single function call
instead of a collection of specific function calls all called from the
same locations.

The powerpc and s390 patches have been compile tested only. Can the
folks copied on this series verify that nothing breaks for them.


Is there a tree somewhere?


I pushed it up to github:

https://github.com/AMDESE/linux/tree/prot-guest-has-v3

Thanks,
Tom



  Also,

a new file, arch/powerpc/platforms/pseries/cc_platform.c, has been
created for powerpc to hold the out of line function.

Cc: Andi Kleen 
Cc: Andy Lutomirski 
Cc: Ard Biesheuvel 
Cc: Baoquan He 
Cc: Benjamin Herrenschmidt 
Cc: Borislav Petkov 
Cc: Christian Borntraeger 
Cc: Daniel Vetter 
Cc: Dave Hansen 
Cc: Dave Young 
Cc: David Airlie 
Cc: Heiko Carstens 
Cc: Ingo Molnar 
Cc: Joerg Roedel 
Cc: Maarten Lankhorst 
Cc: Maxime Ripard 
Cc: Michael Ellerman 
Cc: Paul Mackerras 
Cc: Peter Zijlstra 
Cc: Thomas Gleixner 
Cc: Thomas Zimmermann 
Cc: Vasily Gorbik 
Cc: VMware Graphics 
Cc: Will Deacon 
Cc: Christoph Hellwig 

---

Patches based on:
   
https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git master
   4b93c544e90e ("thunderbolt: test: split up test cases in 
tb_test_credit_alloc_all")


Changes since v2:
- Changed the name from prot_guest_has() to cc_platform_has()
- Took the cc_platform_has() function out of line. Created two new files,
   cc_platform.c, in both x86 and ppc to implement the function. As a
   result, also changed the attribute defines into enums.
- Removed any received Reviewed-by's and Acked-by's given changes in this
   version.
- Added removal of new instances of mem_encrypt_active() usage in powerpc
   arch.
- Based on latest Linux tree to pick up powerpc changes related to the
   mem_encrypt_active() function.

Changes since v1:
- Moved some arch ioremap functions within #ifdef CONFIG_AMD_MEM_ENCRYPT
   in prep for use of prot_guest_has() by TDX.
- Added type includes to the protected_guest.h header file to prevent
   build errors outside of x86.
- Made amd_prot_guest_has() EXPORT_SYMBOL_GPL
- Used amd_prot_guest_has() in place of checking sme_me_mask in the
   arch/x86/mm/mem_encrypt.c file.

Tom Lendacky (8):
   x86/ioremap: Selectively build arch override encryption functions
   mm: Introduce a function to check for confidential computing features
   x86/sev: Add an x86 version of cc_platform_has()
   powerpc/pseries/svm: Add a powerpc version of cc_platform_has()
   x86/sme: Replace occurrences of sme_active() with cc_platform_has()
   x86/sev: Replace occurrences of sev_active() with cc_platform_has()
   x86/sev: Replace occurrences of sev_es_active() with cc_platform_has()
   treewide: Replace the use of mem_encrypt_active() with
 cc_platform_has()

  arch/Kconfig |  3 +
  arch/powerpc/include/asm/mem_encrypt.h   |  5 --
  arch/powerpc/platforms/pseries/Kconfig   |  1 +
  arch/powerpc/platforms/pseries/Makefile  |  2 +
  arch/powerpc/platforms/pseries/cc_platform.c | 26 ++
  arch/powerpc/platforms/pseries/svm.c |  5 +-
  arch/s390/include/asm/mem_encrypt.h  |  2 -
  arch/x86/Kconfig |  1 +
  arch/x86/include/asm/io.h    |  8 ++
  arch/x86/include/asm/kexec.h |  2 +-
  arch/x86/include/asm/mem_encrypt.h   | 14 +---
  arch/x86/kernel/Makefile |  3 +
  arch/x86/kernel/cc_platform.c    | 21 +
  arch/x86/kernel/crash_dump_64.c  |  4 +-
  arch/x86/kernel/head64.c |  4 +-
  arch/x86/kernel/kvm.c    |  3 +-
  arch/x86/kernel/kvmclock.c   |  4 +-
  arch/x86/kernel/machine_kexec_64.c   | 19 +++--
  arch/x86/kernel/pci-swiotlb.c    |  9 +-
  arch/x86/kernel/relocate_kernel_64.S |  2 +-
  arch/x86/kernel/sev.c    |  6 +-
  arch/x86/kvm/svm/svm.c   |  3 +-
  arch/x86/mm/ioremap.c    | 18 ++--
  arch/x86/mm/mem_encrypt.c    | 57 +++--
  arch/x86/mm/mem_encrypt_identity.c   |  3 +-
  arch/x86/mm/pat/set_memory.c |  3 +-
  arch/x86/platform/efi/efi_64.c   |  9 +-
  arch/x8

[PATCH v3 8/8] treewide: Replace the use of mem_encrypt_active() with cc_platform_has()

2021-09-08 Thread Tom Lendacky
Replace uses of mem_encrypt_active() with calls to cc_platform_has() with
the CC_ATTR_MEM_ENCRYPT attribute.

Remove the implementation of mem_encrypt_active() across all arches.

For s390, since the default implementation of the cc_platform_has()
matches the s390 implementation of mem_encrypt_active(), cc_platform_has()
does not need to be implemented in s390 (the config option
ARCH_HAS_CC_PLATFORM is not set).
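
That default is the static inline stub from include/linux/cc_platform.h
(added in patch 2/8), which behaves exactly like the removed s390 helper:

/* When CONFIG_ARCH_HAS_CC_PLATFORM is not set: */
static inline bool cc_platform_has(enum cc_attr attr) { return false; }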

Cc: Thomas Gleixner 
Cc: Ingo Molnar 
Cc: Borislav Petkov 
Cc: Dave Hansen 
Cc: Andy Lutomirski 
Cc: Peter Zijlstra 
Cc: David Airlie 
Cc: Daniel Vetter 
Cc: Maarten Lankhorst 
Cc: Maxime Ripard 
Cc: Thomas Zimmermann 
Cc: VMware Graphics 
Cc: Joerg Roedel 
Cc: Will Deacon 
Cc: Dave Young 
Cc: Baoquan He 
Cc: Michael Ellerman 
Cc: Benjamin Herrenschmidt 
Cc: Paul Mackerras 
Cc: Heiko Carstens 
Cc: Vasily Gorbik 
Cc: Christian Borntraeger 
Signed-off-by: Tom Lendacky 
---
 arch/powerpc/include/asm/mem_encrypt.h  | 5 -
 arch/powerpc/platforms/pseries/svm.c| 5 +++--
 arch/s390/include/asm/mem_encrypt.h | 2 --
 arch/x86/include/asm/mem_encrypt.h  | 5 -
 arch/x86/kernel/head64.c| 4 ++--
 arch/x86/mm/ioremap.c   | 4 ++--
 arch/x86/mm/mem_encrypt.c   | 2 +-
 arch/x86/mm/pat/set_memory.c| 3 ++-
 drivers/gpu/drm/amd/amdgpu/amdgpu_drv.c | 4 +++-
 drivers/gpu/drm/drm_cache.c | 4 ++--
 drivers/gpu/drm/vmwgfx/vmwgfx_drv.c | 4 ++--
 drivers/gpu/drm/vmwgfx/vmwgfx_msg.c | 6 +++---
 drivers/iommu/amd/iommu.c   | 3 ++-
 drivers/iommu/amd/iommu_v2.c| 3 ++-
 drivers/iommu/iommu.c   | 3 ++-
 fs/proc/vmcore.c| 6 +++---
 include/linux/mem_encrypt.h | 4 
 kernel/dma/swiotlb.c| 4 ++--
 18 files changed, 31 insertions(+), 40 deletions(-)

diff --git a/arch/powerpc/include/asm/mem_encrypt.h 
b/arch/powerpc/include/asm/mem_encrypt.h
index ba9dab07c1be..2f26b8fc8d29 100644
--- a/arch/powerpc/include/asm/mem_encrypt.h
+++ b/arch/powerpc/include/asm/mem_encrypt.h
@@ -10,11 +10,6 @@
 
 #include 
 
-static inline bool mem_encrypt_active(void)
-{
-   return is_secure_guest();
-}
-
 static inline bool force_dma_unencrypted(struct device *dev)
 {
return is_secure_guest();
diff --git a/arch/powerpc/platforms/pseries/svm.c 
b/arch/powerpc/platforms/pseries/svm.c
index 87f001b4c4e4..c083ecbbae4d 100644
--- a/arch/powerpc/platforms/pseries/svm.c
+++ b/arch/powerpc/platforms/pseries/svm.c
@@ -8,6 +8,7 @@
 
 #include 
 #include 
+#include 
 #include 
 #include 
 #include 
@@ -63,7 +64,7 @@ void __init svm_swiotlb_init(void)
 
 int set_memory_encrypted(unsigned long addr, int numpages)
 {
-   if (!mem_encrypt_active())
+   if (!cc_platform_has(CC_ATTR_MEM_ENCRYPT))
return 0;
 
if (!PAGE_ALIGNED(addr))
@@ -76,7 +77,7 @@ int set_memory_encrypted(unsigned long addr, int numpages)
 
 int set_memory_decrypted(unsigned long addr, int numpages)
 {
-   if (!mem_encrypt_active())
+   if (!cc_platform_has(CC_ATTR_MEM_ENCRYPT))
return 0;
 
if (!PAGE_ALIGNED(addr))
diff --git a/arch/s390/include/asm/mem_encrypt.h 
b/arch/s390/include/asm/mem_encrypt.h
index 2542cbf7e2d1..08a8b96606d7 100644
--- a/arch/s390/include/asm/mem_encrypt.h
+++ b/arch/s390/include/asm/mem_encrypt.h
@@ -4,8 +4,6 @@
 
 #ifndef __ASSEMBLY__
 
-static inline bool mem_encrypt_active(void) { return false; }
-
 int set_memory_encrypted(unsigned long addr, int numpages);
 int set_memory_decrypted(unsigned long addr, int numpages);
 
diff --git a/arch/x86/include/asm/mem_encrypt.h 
b/arch/x86/include/asm/mem_encrypt.h
index 499440781b39..ed954aa5c448 100644
--- a/arch/x86/include/asm/mem_encrypt.h
+++ b/arch/x86/include/asm/mem_encrypt.h
@@ -98,11 +98,6 @@ static inline void mem_encrypt_free_decrypted_mem(void) { }
 
 extern char __start_bss_decrypted[], __end_bss_decrypted[], 
__start_bss_decrypted_unused[];
 
-static inline bool mem_encrypt_active(void)
-{
-   return sme_me_mask;
-}
-
 static inline u64 sme_get_me_mask(void)
 {
return sme_me_mask;
diff --git a/arch/x86/kernel/head64.c b/arch/x86/kernel/head64.c
index de01903c3735..f98c76a1d16c 100644
--- a/arch/x86/kernel/head64.c
+++ b/arch/x86/kernel/head64.c
@@ -19,7 +19,7 @@
 #include 
 #include 
 #include 
-#include 
+#include 
 #include 
 
 #include 
@@ -285,7 +285,7 @@ unsigned long __head __startup_64(unsigned long physaddr,
 * there is no need to zero it after changing the memory encryption
 * attribute.
 */
-   if (mem_encrypt_active()) {
+   if (cc_platform_has(CC_ATTR_MEM_ENCRYPT)) {
vaddr = (unsigned long)__start_bss_decrypted;
vaddr_end = (unsigned long)__end_bss_decrypted;
for (; vaddr < vaddr_end; vaddr += PMD_SIZE) {
diff --git a/arch/x86/mm/ioremap.c b/arch/x86/mm/ioremap.c
index b59a5cbc6bc5..026031b3b782 100644
--- a/arch/x86/mm/ioremap.c
+++ b/arch/x86

[PATCH v3 7/8] x86/sev: Replace occurrences of sev_es_active() with cc_platform_has()

2021-09-08 Thread Tom Lendacky
Replace uses of sev_es_active() with the more generic cc_platform_has()
using CC_ATTR_GUEST_STATE_ENCRYPT. If future support is added for other
memory encryption technologies, the use of CC_ATTR_GUEST_STATE_ENCRYPT
can be updated, as required.

Cc: Thomas Gleixner 
Cc: Ingo Molnar 
Cc: Borislav Petkov 
Signed-off-by: Tom Lendacky 
---
 arch/x86/include/asm/mem_encrypt.h |  2 --
 arch/x86/kernel/sev.c  |  6 +++---
 arch/x86/mm/mem_encrypt.c  | 14 --
 arch/x86/realmode/init.c   |  3 +--
 4 files changed, 8 insertions(+), 17 deletions(-)

diff --git a/arch/x86/include/asm/mem_encrypt.h 
b/arch/x86/include/asm/mem_encrypt.h
index f440eebeeb2c..499440781b39 100644
--- a/arch/x86/include/asm/mem_encrypt.h
+++ b/arch/x86/include/asm/mem_encrypt.h
@@ -51,7 +51,6 @@ void __init mem_encrypt_free_decrypted_mem(void);
 void __init mem_encrypt_init(void);
 
 void __init sev_es_init_vc_handling(void);
-bool sev_es_active(void);
 bool amd_cc_platform_has(enum cc_attr attr);
 
 #define __bss_decrypted __section(".bss..decrypted")
@@ -75,7 +74,6 @@ static inline void __init sme_encrypt_kernel(struct 
boot_params *bp) { }
 static inline void __init sme_enable(struct boot_params *bp) { }
 
 static inline void sev_es_init_vc_handling(void) { }
-static inline bool sev_es_active(void) { return false; }
 static inline bool amd_cc_platform_has(enum cc_attr attr) { return false; }
 
 static inline int __init
diff --git a/arch/x86/kernel/sev.c b/arch/x86/kernel/sev.c
index a6895e440bc3..53a6837d354b 100644
--- a/arch/x86/kernel/sev.c
+++ b/arch/x86/kernel/sev.c
@@ -11,7 +11,7 @@
 
 #include  /* For show_regs() */
 #include 
-#include 
+#include 
 #include 
 #include 
 #include 
@@ -615,7 +615,7 @@ int __init sev_es_efi_map_ghcbs(pgd_t *pgd)
int cpu;
u64 pfn;
 
-   if (!sev_es_active())
+   if (!cc_platform_has(CC_ATTR_GUEST_STATE_ENCRYPT))
return 0;
 
pflags = _PAGE_NX | _PAGE_RW;
@@ -774,7 +774,7 @@ void __init sev_es_init_vc_handling(void)
 
BUILD_BUG_ON(offsetof(struct sev_es_runtime_data, ghcb_page) % 
PAGE_SIZE);
 
-   if (!sev_es_active())
+   if (!cc_platform_has(CC_ATTR_GUEST_STATE_ENCRYPT))
return;
 
if (!sev_es_check_cpu_features())
diff --git a/arch/x86/mm/mem_encrypt.c b/arch/x86/mm/mem_encrypt.c
index 22d4e152a6de..47d571a2cd28 100644
--- a/arch/x86/mm/mem_encrypt.c
+++ b/arch/x86/mm/mem_encrypt.c
@@ -373,13 +373,6 @@ int __init early_set_memory_encrypted(unsigned long vaddr, 
unsigned long size)
  * up under SME the trampoline area cannot be encrypted, whereas under SEV
  * the trampoline area must be encrypted.
  */
-
-/* Needs to be called from non-instrumentable code */
-bool noinstr sev_es_active(void)
-{
-   return sev_status & MSR_AMD64_SEV_ES_ENABLED;
-}
-
 bool amd_cc_platform_has(enum cc_attr attr)
 {
switch (attr) {
@@ -393,7 +386,7 @@ bool amd_cc_platform_has(enum cc_attr attr)
return sev_status & MSR_AMD64_SEV_ENABLED;
 
case CC_ATTR_GUEST_STATE_ENCRYPT:
-   return sev_es_active();
+   return sev_status & MSR_AMD64_SEV_ES_ENABLED;
 
default:
return false;
@@ -469,7 +462,7 @@ static void print_mem_encrypt_feature_info(void)
pr_cont(" SEV");
 
/* Encrypted Register State */
-   if (sev_es_active())
+   if (cc_platform_has(CC_ATTR_GUEST_STATE_ENCRYPT))
pr_cont(" SEV-ES");
 
pr_cont("\n");
@@ -488,7 +481,8 @@ void __init mem_encrypt_init(void)
 * With SEV, we need to unroll the rep string I/O instructions,
 * but SEV-ES supports them through the #VC handler.
 */
-   if (cc_platform_has(CC_ATTR_GUEST_MEM_ENCRYPT) && !sev_es_active())
+   if (cc_platform_has(CC_ATTR_GUEST_MEM_ENCRYPT) &&
+   !cc_platform_has(CC_ATTR_GUEST_STATE_ENCRYPT))
static_branch_enable(_enable_key);
 
print_mem_encrypt_feature_info();
diff --git a/arch/x86/realmode/init.c b/arch/x86/realmode/init.c
index c878c5ee5a4c..4a3da7592b99 100644
--- a/arch/x86/realmode/init.c
+++ b/arch/x86/realmode/init.c
@@ -2,7 +2,6 @@
 #include 
 #include 
 #include 
-#include 
 #include 
 #include 
 
@@ -48,7 +47,7 @@ static void sme_sev_setup_real_mode(struct trampoline_header 
*th)
if (cc_platform_has(CC_ATTR_HOST_MEM_ENCRYPT))
th->flags |= TH_FLAGS_SME_ACTIVE;
 
-   if (sev_es_active()) {
+   if (cc_platform_has(CC_ATTR_GUEST_STATE_ENCRYPT)) {
/*
 * Skip the call to verify_cpu() in secondary_startup_64 as it
 * will cause #VC exceptions when the AP can't handle them yet.
-- 
2.33.0



[PATCH v3 6/8] x86/sev: Replace occurrences of sev_active() with cc_platform_has()

2021-09-08 Thread Tom Lendacky
Replace uses of sev_active() with the more generic cc_platform_has()
using CC_ATTR_GUEST_MEM_ENCRYPT. If future support is added for other
memory encryption technologies, the use of CC_ATTR_GUEST_MEM_ENCRYPT
can be updated, as required.

Cc: Thomas Gleixner 
Cc: Ingo Molnar 
Cc: Borislav Petkov 
Cc: Dave Hansen 
Cc: Andy Lutomirski 
Cc: Peter Zijlstra 
Cc: Ard Biesheuvel 
Signed-off-by: Tom Lendacky 
---
 arch/x86/include/asm/mem_encrypt.h |  2 --
 arch/x86/kernel/crash_dump_64.c|  4 +++-
 arch/x86/kernel/kvm.c  |  3 ++-
 arch/x86/kernel/kvmclock.c |  4 ++--
 arch/x86/kernel/machine_kexec_64.c |  4 ++--
 arch/x86/kvm/svm/svm.c |  3 ++-
 arch/x86/mm/ioremap.c  |  6 +++---
 arch/x86/mm/mem_encrypt.c  | 25 ++---
 arch/x86/platform/efi/efi_64.c |  9 +
 9 files changed, 29 insertions(+), 31 deletions(-)

diff --git a/arch/x86/include/asm/mem_encrypt.h 
b/arch/x86/include/asm/mem_encrypt.h
index 8c4f0dfe63f9..f440eebeeb2c 100644
--- a/arch/x86/include/asm/mem_encrypt.h
+++ b/arch/x86/include/asm/mem_encrypt.h
@@ -51,7 +51,6 @@ void __init mem_encrypt_free_decrypted_mem(void);
 void __init mem_encrypt_init(void);
 
 void __init sev_es_init_vc_handling(void);
-bool sev_active(void);
 bool sev_es_active(void);
 bool amd_cc_platform_has(enum cc_attr attr);
 
@@ -76,7 +75,6 @@ static inline void __init sme_encrypt_kernel(struct 
boot_params *bp) { }
 static inline void __init sme_enable(struct boot_params *bp) { }
 
 static inline void sev_es_init_vc_handling(void) { }
-static inline bool sev_active(void) { return false; }
 static inline bool sev_es_active(void) { return false; }
 static inline bool amd_cc_platform_has(enum cc_attr attr) { return false; }
 
diff --git a/arch/x86/kernel/crash_dump_64.c b/arch/x86/kernel/crash_dump_64.c
index 045e82e8945b..a7f617a3981d 100644
--- a/arch/x86/kernel/crash_dump_64.c
+++ b/arch/x86/kernel/crash_dump_64.c
@@ -10,6 +10,7 @@
 #include 
 #include 
 #include 
+#include 
 
 static ssize_t __copy_oldmem_page(unsigned long pfn, char *buf, size_t csize,
  unsigned long offset, int userbuf,
@@ -73,5 +74,6 @@ ssize_t copy_oldmem_page_encrypted(unsigned long pfn, char 
*buf, size_t csize,
 
 ssize_t elfcorehdr_read(char *buf, size_t count, u64 *ppos)
 {
-   return read_from_oldmem(buf, count, ppos, 0, sev_active());
+   return read_from_oldmem(buf, count, ppos, 0,
+   cc_platform_has(CC_ATTR_GUEST_MEM_ENCRYPT));
 }
diff --git a/arch/x86/kernel/kvm.c b/arch/x86/kernel/kvm.c
index a26643dc6bd6..509a578f56a0 100644
--- a/arch/x86/kernel/kvm.c
+++ b/arch/x86/kernel/kvm.c
@@ -27,6 +27,7 @@
 #include 
 #include 
 #include 
+#include 
 #include 
 #include 
 #include 
@@ -418,7 +419,7 @@ static void __init sev_map_percpu_data(void)
 {
int cpu;
 
-   if (!sev_active())
+   if (!cc_platform_has(CC_ATTR_GUEST_MEM_ENCRYPT))
return;
 
for_each_possible_cpu(cpu) {
diff --git a/arch/x86/kernel/kvmclock.c b/arch/x86/kernel/kvmclock.c
index ad273e5861c1..fc3930c5db1b 100644
--- a/arch/x86/kernel/kvmclock.c
+++ b/arch/x86/kernel/kvmclock.c
@@ -16,9 +16,9 @@
 #include 
 #include 
 #include 
+#include 
 
 #include 
-#include 
 #include 
 #include 
 
@@ -232,7 +232,7 @@ static void __init kvmclock_init_mem(void)
 * hvclock is shared between the guest and the hypervisor, must
 * be mapped decrypted.
 */
-   if (sev_active()) {
+   if (cc_platform_has(CC_ATTR_GUEST_MEM_ENCRYPT)) {
r = set_memory_decrypted((unsigned long) hvclock_mem,
 1UL << order);
if (r) {
diff --git a/arch/x86/kernel/machine_kexec_64.c 
b/arch/x86/kernel/machine_kexec_64.c
index 7040c0fa921c..f5da4a18070a 100644
--- a/arch/x86/kernel/machine_kexec_64.c
+++ b/arch/x86/kernel/machine_kexec_64.c
@@ -167,7 +167,7 @@ static int init_transition_pgtable(struct kimage *image, 
pgd_t *pgd)
}
pte = pte_offset_kernel(pmd, vaddr);
 
-   if (sev_active())
+   if (cc_platform_has(CC_ATTR_GUEST_MEM_ENCRYPT))
prot = PAGE_KERNEL_EXEC;
 
set_pte(pte, pfn_pte(paddr >> PAGE_SHIFT, prot));
@@ -207,7 +207,7 @@ static int init_pgtable(struct kimage *image, unsigned long 
start_pgtable)
level4p = (pgd_t *)__va(start_pgtable);
clear_page(level4p);
 
-   if (sev_active()) {
+   if (cc_platform_has(CC_ATTR_GUEST_MEM_ENCRYPT)) {
info.page_flag   |= _PAGE_ENC;
info.kernpg_flag |= _PAGE_ENC;
}
diff --git a/arch/x86/kvm/svm/svm.c b/arch/x86/kvm/svm/svm.c
index 69639f9624f5..eb3669154b48 100644
--- a/arch/x86/kvm/svm/svm.c
+++ b/arch/x86/kvm/svm/svm.c
@@ -25,6 +25,7 @@
 #include 
 #include 
 #include 
+#include 
 
 #include 
 #include 
@@ -457,7 +458,7 @@ static int has_svm(void)
return 0;
}
 
-   if

[PATCH v3 5/8] x86/sme: Replace occurrences of sme_active() with cc_platform_has()

2021-09-08 Thread Tom Lendacky
Replace uses of sme_active() with the more generic cc_platform_has()
using CC_ATTR_HOST_MEM_ENCRYPT. If future support is added for other
memory encryption technologies, the use of CC_ATTR_HOST_MEM_ENCRYPT
can be updated, as required.

This also replaces two usages of sev_active() that are really geared
towards detecting if SME is active.

Cc: Thomas Gleixner 
Cc: Ingo Molnar 
Cc: Borislav Petkov 
Cc: Dave Hansen 
Cc: Andy Lutomirski 
Cc: Peter Zijlstra 
Cc: Joerg Roedel 
Cc: Will Deacon 
Signed-off-by: Tom Lendacky 
---
 arch/x86/include/asm/kexec.h |  2 +-
 arch/x86/include/asm/mem_encrypt.h   |  2 --
 arch/x86/kernel/machine_kexec_64.c   | 15 ---
 arch/x86/kernel/pci-swiotlb.c|  9 -
 arch/x86/kernel/relocate_kernel_64.S |  2 +-
 arch/x86/mm/ioremap.c|  6 +++---
 arch/x86/mm/mem_encrypt.c| 15 +--
 arch/x86/mm/mem_encrypt_identity.c   |  3 ++-
 arch/x86/realmode/init.c |  5 +++--
 drivers/iommu/amd/init.c |  7 ---
 10 files changed, 31 insertions(+), 35 deletions(-)

diff --git a/arch/x86/include/asm/kexec.h b/arch/x86/include/asm/kexec.h
index 0a6e34b07017..11b7c06e2828 100644
--- a/arch/x86/include/asm/kexec.h
+++ b/arch/x86/include/asm/kexec.h
@@ -129,7 +129,7 @@ relocate_kernel(unsigned long indirection_page,
unsigned long page_list,
unsigned long start_address,
unsigned int preserve_context,
-   unsigned int sme_active);
+   unsigned int host_mem_enc_active);
 #endif
 
 #define ARCH_HAS_KIMAGE_ARCH
diff --git a/arch/x86/include/asm/mem_encrypt.h 
b/arch/x86/include/asm/mem_encrypt.h
index 3d8a5e8b2e3f..8c4f0dfe63f9 100644
--- a/arch/x86/include/asm/mem_encrypt.h
+++ b/arch/x86/include/asm/mem_encrypt.h
@@ -51,7 +51,6 @@ void __init mem_encrypt_free_decrypted_mem(void);
 void __init mem_encrypt_init(void);
 
 void __init sev_es_init_vc_handling(void);
-bool sme_active(void);
 bool sev_active(void);
 bool sev_es_active(void);
 bool amd_cc_platform_has(enum cc_attr attr);
@@ -77,7 +76,6 @@ static inline void __init sme_encrypt_kernel(struct 
boot_params *bp) { }
 static inline void __init sme_enable(struct boot_params *bp) { }
 
 static inline void sev_es_init_vc_handling(void) { }
-static inline bool sme_active(void) { return false; }
 static inline bool sev_active(void) { return false; }
 static inline bool sev_es_active(void) { return false; }
 static inline bool amd_cc_platform_has(enum cc_attr attr) { return false; }
diff --git a/arch/x86/kernel/machine_kexec_64.c 
b/arch/x86/kernel/machine_kexec_64.c
index 131f30fdcfbd..7040c0fa921c 100644
--- a/arch/x86/kernel/machine_kexec_64.c
+++ b/arch/x86/kernel/machine_kexec_64.c
@@ -17,6 +17,7 @@
 #include 
 #include 
 #include 
+#include 
 
 #include 
 #include 
@@ -358,7 +359,7 @@ void machine_kexec(struct kimage *image)
   (unsigned long)page_list,
   image->start,
   image->preserve_context,
-  sme_active());
+  cc_platform_has(CC_ATTR_HOST_MEM_ENCRYPT));
 
 #ifdef CONFIG_KEXEC_JUMP
if (image->preserve_context)
@@ -569,12 +570,12 @@ void arch_kexec_unprotect_crashkres(void)
  */
 int arch_kexec_post_alloc_pages(void *vaddr, unsigned int pages, gfp_t gfp)
 {
-   if (sev_active())
+   if (!cc_platform_has(CC_ATTR_HOST_MEM_ENCRYPT))
return 0;
 
/*
-* If SME is active we need to be sure that kexec pages are
-* not encrypted because when we boot to the new kernel the
+* If host memory encryption is active we need to be sure that kexec
+* pages are not encrypted because when we boot to the new kernel the
 * pages won't be accessed encrypted (initially).
 */
return set_memory_decrypted((unsigned long)vaddr, pages);
@@ -582,12 +583,12 @@ int arch_kexec_post_alloc_pages(void *vaddr, unsigned int 
pages, gfp_t gfp)
 
 void arch_kexec_pre_free_pages(void *vaddr, unsigned int pages)
 {
-   if (sev_active())
+   if (!cc_platform_has(CC_ATTR_HOST_MEM_ENCRYPT))
return;
 
/*
-* If SME is active we need to reset the pages back to being
-* an encrypted mapping before freeing them.
+* If host memory encryption is active we need to reset the pages back
+* to being an encrypted mapping before freeing them.
 */
set_memory_encrypted((unsigned long)vaddr, pages);
 }
diff --git a/arch/x86/kernel/pci-swiotlb.c b/arch/x86/kernel/pci-swiotlb.c
index c2cfa5e7c152..814ab46a0dad 100644
--- a/arch/x86/kernel/pci-swiotlb.c
+++ b/arch/x86/kernel/pci-swiotlb.c
@@ -6,7 +6,7 @@
 #include 
 #include 
 #include 
-#include 
+#include 
 
 #include 
 #include 
@@ -45,11 +45,10 @@ int __init pci_swiotlb_detect_4gb(void)
sw

[PATCH v3 4/8] powerpc/pseries/svm: Add a powerpc version of cc_platform_has()

2021-09-08 Thread Tom Lendacky
Introduce a powerpc version of the cc_platform_has() function. This will
be used to replace the powerpc mem_encrypt_active() implementation, so
the implementation will initially only support the CC_ATTR_MEM_ENCRYPT
attribute.

Cc: Michael Ellerman 
Cc: Benjamin Herrenschmidt 
Cc: Paul Mackerras 
Signed-off-by: Tom Lendacky 
---
 arch/powerpc/platforms/pseries/Kconfig   |  1 +
 arch/powerpc/platforms/pseries/Makefile  |  2 ++
 arch/powerpc/platforms/pseries/cc_platform.c | 26 
 3 files changed, 29 insertions(+)
 create mode 100644 arch/powerpc/platforms/pseries/cc_platform.c

diff --git a/arch/powerpc/platforms/pseries/Kconfig 
b/arch/powerpc/platforms/pseries/Kconfig
index 5e037df2a3a1..2e57391e0778 100644
--- a/arch/powerpc/platforms/pseries/Kconfig
+++ b/arch/powerpc/platforms/pseries/Kconfig
@@ -159,6 +159,7 @@ config PPC_SVM
select SWIOTLB
select ARCH_HAS_MEM_ENCRYPT
select ARCH_HAS_FORCE_DMA_UNENCRYPTED
+   select ARCH_HAS_CC_PLATFORM
help
 There are certain POWER platforms which support secure guests using
 the Protected Execution Facility, with the help of an Ultravisor
diff --git a/arch/powerpc/platforms/pseries/Makefile 
b/arch/powerpc/platforms/pseries/Makefile
index 4cda0ef87be0..41d8aee98da4 100644
--- a/arch/powerpc/platforms/pseries/Makefile
+++ b/arch/powerpc/platforms/pseries/Makefile
@@ -31,3 +31,5 @@ obj-$(CONFIG_FA_DUMP) += rtas-fadump.o
 
 obj-$(CONFIG_SUSPEND)  += suspend.o
 obj-$(CONFIG_PPC_VAS)  += vas.o
+
+obj-$(CONFIG_ARCH_HAS_CC_PLATFORM) += cc_platform.o
diff --git a/arch/powerpc/platforms/pseries/cc_platform.c 
b/arch/powerpc/platforms/pseries/cc_platform.c
new file mode 100644
index ..e8021af83a19
--- /dev/null
+++ b/arch/powerpc/platforms/pseries/cc_platform.c
@@ -0,0 +1,26 @@
+// SPDX-License-Identifier: GPL-2.0-only
+/*
+ * Confidential Computing Platform Capability checks
+ *
+ * Copyright (C) 2021 Advanced Micro Devices, Inc.
+ *
+ * Author: Tom Lendacky 
+ */
+
+#include 
+#include 
+
+#include 
+#include 
+
+bool cc_platform_has(enum cc_attr attr)
+{
+   switch (attr) {
+   case CC_ATTR_MEM_ENCRYPT:
+   return is_secure_guest();
+
+   default:
+   return false;
+   }
+}
+EXPORT_SYMBOL_GPL(cc_platform_has);
-- 
2.33.0



[PATCH v3 3/8] x86/sev: Add an x86 version of cc_platform_has()

2021-09-08 Thread Tom Lendacky
Introduce an x86 version of the cc_platform_has() function. This will be
used to replace vendor specific calls like sme_active(), sev_active(),
etc.

Cc: Thomas Gleixner 
Cc: Ingo Molnar 
Cc: Borislav Petkov 
Cc: Dave Hansen 
Cc: Andy Lutomirski 
Cc: Peter Zijlstra 
Co-developed-by: Andi Kleen 
Signed-off-by: Andi Kleen 
Co-developed-by: Kuppuswamy Sathyanarayanan 

Signed-off-by: Kuppuswamy Sathyanarayanan 

Signed-off-by: Tom Lendacky 
---
 arch/x86/Kconfig   |  1 +
 arch/x86/include/asm/mem_encrypt.h |  3 +++
 arch/x86/kernel/Makefile   |  3 +++
 arch/x86/kernel/cc_platform.c  | 21 +
 arch/x86/mm/mem_encrypt.c  | 21 +
 5 files changed, 49 insertions(+)
 create mode 100644 arch/x86/kernel/cc_platform.c

diff --git a/arch/x86/Kconfig b/arch/x86/Kconfig
index 4e001425..2b2a9639d8ae 100644
--- a/arch/x86/Kconfig
+++ b/arch/x86/Kconfig
@@ -1513,6 +1513,7 @@ config AMD_MEM_ENCRYPT
select ARCH_HAS_FORCE_DMA_UNENCRYPTED
select INSTRUCTION_DECODER
select ARCH_HAS_RESTRICTED_VIRTIO_MEMORY_ACCESS
+   select ARCH_HAS_CC_PLATFORM
help
  Say yes to enable support for the encryption of system memory.
  This requires an AMD processor that supports Secure Memory
diff --git a/arch/x86/include/asm/mem_encrypt.h 
b/arch/x86/include/asm/mem_encrypt.h
index 9c80c68d75b5..3d8a5e8b2e3f 100644
--- a/arch/x86/include/asm/mem_encrypt.h
+++ b/arch/x86/include/asm/mem_encrypt.h
@@ -13,6 +13,7 @@
 #ifndef __ASSEMBLY__
 
 #include 
+#include 
 
 #include 
 
@@ -53,6 +54,7 @@ void __init sev_es_init_vc_handling(void);
 bool sme_active(void);
 bool sev_active(void);
 bool sev_es_active(void);
+bool amd_cc_platform_has(enum cc_attr attr);
 
 #define __bss_decrypted __section(".bss..decrypted")
 
@@ -78,6 +80,7 @@ static inline void sev_es_init_vc_handling(void) { }
 static inline bool sme_active(void) { return false; }
 static inline bool sev_active(void) { return false; }
 static inline bool sev_es_active(void) { return false; }
+static inline bool amd_cc_platform_has(enum cc_attr attr) { return false; }
 
 static inline int __init
 early_set_memory_decrypted(unsigned long vaddr, unsigned long size) { return 
0; }
diff --git a/arch/x86/kernel/Makefile b/arch/x86/kernel/Makefile
index 8f4e8fa6ed75..f91403a78594 100644
--- a/arch/x86/kernel/Makefile
+++ b/arch/x86/kernel/Makefile
@@ -147,6 +147,9 @@ obj-$(CONFIG_UNWINDER_FRAME_POINTER)+= 
unwind_frame.o
 obj-$(CONFIG_UNWINDER_GUESS)   += unwind_guess.o
 
 obj-$(CONFIG_AMD_MEM_ENCRYPT)  += sev.o
+
+obj-$(CONFIG_ARCH_HAS_CC_PLATFORM) += cc_platform.o
+
 ###
 # 64 bit specific files
 ifeq ($(CONFIG_X86_64),y)
diff --git a/arch/x86/kernel/cc_platform.c b/arch/x86/kernel/cc_platform.c
new file mode 100644
index ..3c9bacd3c3f3
--- /dev/null
+++ b/arch/x86/kernel/cc_platform.c
@@ -0,0 +1,21 @@
+// SPDX-License-Identifier: GPL-2.0-only
+/*
+ * Confidential Computing Platform Capability checks
+ *
+ * Copyright (C) 2021 Advanced Micro Devices, Inc.
+ *
+ * Author: Tom Lendacky 
+ */
+
+#include 
+#include 
+#include 
+
+bool cc_platform_has(enum cc_attr attr)
+{
+   if (sme_me_mask)
+   return amd_cc_platform_has(attr);
+
+   return false;
+}
+EXPORT_SYMBOL_GPL(cc_platform_has);
diff --git a/arch/x86/mm/mem_encrypt.c b/arch/x86/mm/mem_encrypt.c
index ff08dc463634..18fe19916bc3 100644
--- a/arch/x86/mm/mem_encrypt.c
+++ b/arch/x86/mm/mem_encrypt.c
@@ -20,6 +20,7 @@
 #include 
 #include 
 #include 
+#include 
 
 #include 
 #include 
@@ -389,6 +390,26 @@ bool noinstr sev_es_active(void)
return sev_status & MSR_AMD64_SEV_ES_ENABLED;
 }
 
+bool amd_cc_platform_has(enum cc_attr attr)
+{
+   switch (attr) {
+   case CC_ATTR_MEM_ENCRYPT:
+   return sme_me_mask != 0;
+
+   case CC_ATTR_HOST_MEM_ENCRYPT:
+   return sme_active();
+
+   case CC_ATTR_GUEST_MEM_ENCRYPT:
+   return sev_active();
+
+   case CC_ATTR_GUEST_STATE_ENCRYPT:
+   return sev_es_active();
+
+   default:
+   return false;
+   }
+}
+
 /* Override for DMA direct allocation check - ARCH_HAS_FORCE_DMA_UNENCRYPTED */
 bool force_dma_unencrypted(struct device *dev)
 {
-- 
2.33.0



[PATCH v3 2/8] mm: Introduce a function to check for confidential computing features

2021-09-08 Thread Tom Lendacky
In prep for other confidential computing technologies, introduce a generic
helper function, cc_platform_has(), that can be used to check for specific
active confidential computing attributes, like memory encryption. This is
intended to eliminate having to add multiple technology-specific checks to
the code (e.g. if (sev_active() || tdx_active())).
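
A before/after sketch of the intent (the call site and
setup_decrypted_mapping() are hypothetical):

	/* Before: one vendor-specific check per technology */
	if (sev_active() || tdx_active())
		setup_decrypted_mapping();

	/* After: a single attribute-based check */
	if (cc_platform_has(CC_ATTR_GUEST_MEM_ENCRYPT))
		setup_decrypted_mapping();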

Co-developed-by: Andi Kleen 
Signed-off-by: Andi Kleen 
Co-developed-by: Kuppuswamy Sathyanarayanan 

Signed-off-by: Kuppuswamy Sathyanarayanan 

Signed-off-by: Tom Lendacky 
---
 arch/Kconfig|  3 ++
 include/linux/cc_platform.h | 88 +
 2 files changed, 91 insertions(+)
 create mode 100644 include/linux/cc_platform.h

diff --git a/arch/Kconfig b/arch/Kconfig
index 3743174da870..ca7c359e5da8 100644
--- a/arch/Kconfig
+++ b/arch/Kconfig
@@ -1234,6 +1234,9 @@ config RELR
 config ARCH_HAS_MEM_ENCRYPT
bool
 
+config ARCH_HAS_CC_PLATFORM
+   bool
+
 config HAVE_SPARSE_SYSCALL_NR
bool
help
diff --git a/include/linux/cc_platform.h b/include/linux/cc_platform.h
new file mode 100644
index ..253f3ea66cd8
--- /dev/null
+++ b/include/linux/cc_platform.h
@@ -0,0 +1,88 @@
+/* SPDX-License-Identifier: GPL-2.0-only */
+/*
+ * Confidential Computing Platform Capability checks
+ *
+ * Copyright (C) 2021 Advanced Micro Devices, Inc.
+ *
+ * Author: Tom Lendacky 
+ */
+
+#ifndef _CC_PLATFORM_H
+#define _CC_PLATFORM_H
+
+#include 
+#include 
+
+/**
+ * enum cc_attr - Confidential computing attributes
+ *
+ * These attributes represent confidential computing features that are
+ * currently active.
+ */
+enum cc_attr {
+   /**
+* @CC_ATTR_MEM_ENCRYPT: Memory encryption is active
+*
+* The platform/OS is running with active memory encryption. This
+* includes running either as a bare-metal system or a hypervisor
+* and actively using memory encryption or as a guest/virtual machine
+* and actively using memory encryption.
+*
+* Examples include SME, SEV and SEV-ES.
+*/
+   CC_ATTR_MEM_ENCRYPT,
+
+   /**
+* @CC_ATTR_HOST_MEM_ENCRYPT: Host memory encryption is active
+*
+* The platform/OS is running as a bare-metal system or a hypervisor
+* and actively using memory encryption.
+*
+* Examples include SME.
+*/
+   CC_ATTR_HOST_MEM_ENCRYPT,
+
+   /**
+* @CC_ATTR_GUEST_MEM_ENCRYPT: Guest memory encryption is active
+*
+* The platform/OS is running as a guest/virtual machine and actively
+* using memory encryption.
+*
+* Examples include SEV and SEV-ES.
+*/
+   CC_ATTR_GUEST_MEM_ENCRYPT,
+
+   /**
+* @CC_ATTR_GUEST_STATE_ENCRYPT: Guest state encryption is active
+*
+* The platform/OS is running as a guest/virtual machine and actively
+* using memory encryption and register state encryption.
+*
+* Examples include SEV-ES.
+*/
+   CC_ATTR_GUEST_STATE_ENCRYPT,
+};
+
+#ifdef CONFIG_ARCH_HAS_CC_PLATFORM
+
+/**
+ * cc_platform_has() - Checks if the specified cc_attr attribute is active
+ * @attr: Confidential computing attribute to check
+ *
+ * The cc_platform_has() function will return an indicator as to whether the
+ * specified Confidential Computing attribute is currently active.
+ *
+ * Context: Any context
+ * Return:
+ * * TRUE  - Specified Confidential Computing attribute is active
+ * * FALSE - Specified Confidential Computing attribute is not active
+ */
+bool cc_platform_has(enum cc_attr attr);
+
+#else  /* !CONFIG_ARCH_HAS_CC_PLATFORM */
+
+static inline bool cc_platform_has(enum cc_attr attr) { return false; }
+
+#endif /* CONFIG_ARCH_HAS_CC_PLATFORM */
+
+#endif /* _CC_PLATFORM_H */
-- 
2.33.0



[PATCH v3 1/8] x86/ioremap: Selectively build arch override encryption functions

2021-09-08 Thread Tom Lendacky
In prep for other uses of the cc_platform_has() function besides AMD's
memory encryption support, selectively build the AMD memory encryption
architecture override functions only when CONFIG_AMD_MEM_ENCRYPT=y. These
functions are:
- early_memremap_pgprot_adjust()
- arch_memremap_can_ram_remap()

Additionally, routines that are only invoked by these architecture
override functions can also be conditionally built. These functions are:
- memremap_should_map_decrypted()
- memremap_is_efi_data()
- memremap_is_setup_data()
- early_memremap_is_setup_data()

And finally, phys_mem_access_encrypted() is conditionally built as well,
but requires a static inline version of it when CONFIG_AMD_MEM_ENCRYPT is
not set.

Cc: Thomas Gleixner 
Cc: Ingo Molnar 
Cc: Borislav Petkov 
Cc: Dave Hansen 
Cc: Andy Lutomirski 
Cc: Peter Zijlstra 
Signed-off-by: Tom Lendacky 
---
 arch/x86/include/asm/io.h | 8 
 arch/x86/mm/ioremap.c | 2 +-
 2 files changed, 9 insertions(+), 1 deletion(-)

diff --git a/arch/x86/include/asm/io.h b/arch/x86/include/asm/io.h
index 841a5d104afa..5c6a4af0b911 100644
--- a/arch/x86/include/asm/io.h
+++ b/arch/x86/include/asm/io.h
@@ -391,6 +391,7 @@ extern void arch_io_free_memtype_wc(resource_size_t start, 
resource_size_t size)
 #define arch_io_reserve_memtype_wc arch_io_reserve_memtype_wc
 #endif
 
+#ifdef CONFIG_AMD_MEM_ENCRYPT
 extern bool arch_memremap_can_ram_remap(resource_size_t offset,
unsigned long size,
unsigned long flags);
@@ -398,6 +399,13 @@ extern bool arch_memremap_can_ram_remap(resource_size_t 
offset,
 
 extern bool phys_mem_access_encrypted(unsigned long phys_addr,
  unsigned long size);
+#else
+static inline bool phys_mem_access_encrypted(unsigned long phys_addr,
+unsigned long size)
+{
+   return true;
+}
+#endif
 
 /**
  * iosubmit_cmds512 - copy data to single MMIO location, in 512-bit units
diff --git a/arch/x86/mm/ioremap.c b/arch/x86/mm/ioremap.c
index 60ade7dd71bd..ccff76cedd8f 100644
--- a/arch/x86/mm/ioremap.c
+++ b/arch/x86/mm/ioremap.c
@@ -508,6 +508,7 @@ void unxlate_dev_mem_ptr(phys_addr_t phys, void *addr)
memunmap((void *)((unsigned long)addr & PAGE_MASK));
 }
 
+#ifdef CONFIG_AMD_MEM_ENCRYPT
 /*
  * Examine the physical address to determine if it is an area of memory
  * that should be mapped decrypted.  If the memory is not part of the
@@ -746,7 +747,6 @@ bool phys_mem_access_encrypted(unsigned long phys_addr, 
unsigned long size)
return arch_memremap_can_ram_remap(phys_addr, size, 0);
 }
 
-#ifdef CONFIG_AMD_MEM_ENCRYPT
 /* Remap memory with encryption */
 void __init *early_memremap_encrypted(resource_size_t phys_addr,
  unsigned long size)
-- 
2.33.0



[PATCH v3 0/8] Implement generic cc_platform_has() helper function

2021-09-08 Thread Tom Lendacky
This patch series provides a generic helper function, cc_platform_has(),
to replace the sme_active(), sev_active(), sev_es_active() and
mem_encrypt_active() functions.

It is expected that as new confidential computing technologies are
added to the kernel, they can all be covered by a single function call
instead of a collection of technology-specific function calls at the
same call sites. The intended mapping is sketched below.
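
Roughly (attribute names assumed from the series' enum cc_attr):

  sme_active()          -> cc_platform_has(CC_ATTR_HOST_MEM_ENCRYPT)
  sev_active()          -> cc_platform_has(CC_ATTR_GUEST_MEM_ENCRYPT)
  sev_es_active()       -> cc_platform_has(CC_ATTR_GUEST_STATE_ENCRYPT)
  mem_encrypt_active()  -> cc_platform_has(CC_ATTR_MEM_ENCRYPT)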

The powerpc and s390 patches have been compile tested only. Can the
folks copied on this series verify that nothing breaks for them? Also,
a new file, arch/powerpc/platforms/pseries/cc_platform.c, has been
created for powerpc to hold the out-of-line function.
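
(A minimal sketch of what that out-of-line powerpc function could look
like, mirroring the earlier prot_guest_has() logic; the exact body here
is an assumption, not the posted patch:)

/* arch/powerpc/platforms/pseries/cc_platform.c */
#include <linux/export.h>
#include <linux/cc_platform.h>

#include <asm/svm.h>

bool cc_platform_has(enum cc_attr attr)
{
        switch (attr) {
        case CC_ATTR_MEM_ENCRYPT:
                /* secure guests under the Ultravisor have encrypted memory */
                return is_secure_guest();
        default:
                return false;
        }
}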

Cc: Andi Kleen 
Cc: Andy Lutomirski 
Cc: Ard Biesheuvel 
Cc: Baoquan He 
Cc: Benjamin Herrenschmidt 
Cc: Borislav Petkov 
Cc: Christian Borntraeger 
Cc: Daniel Vetter 
Cc: Dave Hansen 
Cc: Dave Young 
Cc: David Airlie 
Cc: Heiko Carstens 
Cc: Ingo Molnar 
Cc: Joerg Roedel 
Cc: Maarten Lankhorst 
Cc: Maxime Ripard 
Cc: Michael Ellerman 
Cc: Paul Mackerras 
Cc: Peter Zijlstra 
Cc: Thomas Gleixner 
Cc: Thomas Zimmermann 
Cc: Vasily Gorbik 
Cc: VMware Graphics 
Cc: Will Deacon 
Cc: Christoph Hellwig 

---

Patches based on:
  https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git master
  4b93c544e90e ("thunderbolt: test: split up test cases in tb_test_credit_alloc_all")

Changes since v2:
- Changed the name from prot_guest_has() to cc_platform_has()
- Took the cc_platform_has() function out of line. Created two new files,
  cc_platform.c, in both x86 and ppc to implement the function. As a
  result, also changed the attribute defines into enums.
- Removed any received Reviewed-by's and Acked-by's given changes in this
  version.
- Added removal of new instances of mem_encrypt_active() usage in powerpc
  arch.
- Based on latest Linux tree to pick up powerpc changes related to the
  mem_encrypt_active() function.

Changes since v1:
- Moved some arch ioremap functions within #ifdef CONFIG_AMD_MEM_ENCRYPT
  in prep for use of prot_guest_has() by TDX.
- Added type includes to the protected_guest.h header file to prevent
  build errors outside of x86.
- Made amd_prot_guest_has() EXPORT_SYMBOL_GPL
- Used amd_prot_guest_has() in place of checking sme_me_mask in the
  arch/x86/mm/mem_encrypt.c file.

Tom Lendacky (8):
  x86/ioremap: Selectively build arch override encryption functions
  mm: Introduce a function to check for confidential computing features
  x86/sev: Add an x86 version of cc_platform_has()
  powerpc/pseries/svm: Add a powerpc version of cc_platform_has()
  x86/sme: Replace occurrences of sme_active() with cc_platform_has()
  x86/sev: Replace occurrences of sev_active() with cc_platform_has()
  x86/sev: Replace occurrences of sev_es_active() with cc_platform_has()
  treewide: Replace the use of mem_encrypt_active() with
cc_platform_has()

 arch/Kconfig                                 |  3 +
 arch/powerpc/include/asm/mem_encrypt.h       |  5 --
 arch/powerpc/platforms/pseries/Kconfig       |  1 +
 arch/powerpc/platforms/pseries/Makefile      |  2 +
 arch/powerpc/platforms/pseries/cc_platform.c | 26 ++
 arch/powerpc/platforms/pseries/svm.c         |  5 +-
 arch/s390/include/asm/mem_encrypt.h          |  2 -
 arch/x86/Kconfig                             |  1 +
 arch/x86/include/asm/io.h                    |  8 ++
 arch/x86/include/asm/kexec.h                 |  2 +-
 arch/x86/include/asm/mem_encrypt.h           | 14 +---
 arch/x86/kernel/Makefile                     |  3 +
 arch/x86/kernel/cc_platform.c                | 21 +
 arch/x86/kernel/crash_dump_64.c              |  4 +-
 arch/x86/kernel/head64.c                     |  4 +-
 arch/x86/kernel/kvm.c                        |  3 +-
 arch/x86/kernel/kvmclock.c                   |  4 +-
 arch/x86/kernel/machine_kexec_64.c           | 19 +++--
 arch/x86/kernel/pci-swiotlb.c                |  9 +-
 arch/x86/kernel/relocate_kernel_64.S         |  2 +-
 arch/x86/kernel/sev.c                        |  6 +-
 arch/x86/kvm/svm/svm.c                       |  3 +-
 arch/x86/mm/ioremap.c                        | 18 ++--
 arch/x86/mm/mem_encrypt.c                    | 57 +++--
 arch/x86/mm/mem_encrypt_identity.c           |  3 +-
 arch/x86/mm/pat/set_memory.c                 |  3 +-
 arch/x86/platform/efi/efi_64.c               |  9 +-
 arch/x86/realmode/init.c                     |  8 +-
 drivers/gpu/drm/amd/amdgpu/amdgpu_drv.c      |  4 +-
 drivers/gpu/drm/drm_cache.c                  |  4 +-
 drivers/gpu/drm/vmwgfx/vmwgfx_drv.c          |  4 +-
 drivers/gpu/drm/vmwgfx/vmwgfx_msg.c          |  6 +-
 drivers/iommu/amd/init.c                     |  7 +-
 drivers/iommu/amd/iommu.c                    |  3 +-
 drivers/iommu/amd/iommu_v2.c                 |  3 +-
 drivers/iommu/iommu.c                        |  3 +-
 fs/proc/vmcore.c                             |  6 +-
 include/linux/cc_platform.h                  | 88 
 include/linux/mem_encrypt.h

Re: [PATCH v2 04/12] powerpc/pseries/svm: Add a powerpc version of prot_guest_has()

2021-08-19 Thread Tom Lendacky
On 8/19/21 4:55 AM, Christoph Hellwig wrote:
> On Fri, Aug 13, 2021 at 11:59:23AM -0500, Tom Lendacky wrote:
>> +static inline bool prot_guest_has(unsigned int attr)
> 
> No real need to have this inline.  In fact I'd suggest we have the
> prototype in a common header so that everyone must implement it out
> of line.

I'll do the same thing I end up doing for x86.

Thanks,
Tom

> 


Re: [PATCH v2 03/12] x86/sev: Add an x86 version of prot_guest_has()

2021-08-19 Thread Tom Lendacky
On 8/19/21 4:52 AM, Christoph Hellwig wrote:
> On Fri, Aug 13, 2021 at 11:59:22AM -0500, Tom Lendacky wrote:
>> While the name suggests this is intended mainly for guests, it will
>> also be used for host memory encryption checks in place of sme_active().
> 
> Which suggests that the name is not good to start with.  Maybe protected
> hardware, system or platform might be a better choice?
> 
>> +static inline bool prot_guest_has(unsigned int attr)
>> +{
>> +#ifdef CONFIG_AMD_MEM_ENCRYPT
>> +if (sme_me_mask)
>> +return amd_prot_guest_has(attr);
>> +#endif
>> +
>> +return false;
>> +}
> 
> Shouldn't this be entirely out of line?

I did it as inline originally because the presence of the function will be
decided based on the ARCH_HAS_PROTECTED_GUEST config. For now, that is
only selected by the AMD memory encryption support, so if I went out of
line I could put it in mem_encrypt.c. But with TDX also wanting to use it, it
would have to be in an always built file with some #ifdefs or in its own
file that is conditionally built based on the ARCH_HAS_PROTECTED_GUEST
setting (they've already tried building with ARCH_HAS_PROTECTED_GUEST=y
and AMD_MEM_ENCRYPT not set).

To take it out of line, I'm leaning towards the latter, creating a new
file that is built based on the ARCH_HAS_PROTECTED_GUEST setting.
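
(One possible shape for such a file, as a sketch only; the file name and
Makefile hook are assumptions, not the final patch:)

/* e.g. arch/x86/kernel/protected_guest.c, built via
 * obj-$(CONFIG_ARCH_HAS_PROTECTED_GUEST) += protected_guest.o
 */
#include <linux/export.h>
#include <linux/protected_guest.h>

#include <asm/mem_encrypt.h>

bool prot_guest_has(unsigned int attr)
{
        /* same logic as the current inline: defer to the AMD helper
         * only when SME/SEV is enabled */
        if (sme_me_mask)
                return amd_prot_guest_has(attr);

        return false;
}
EXPORT_SYMBOL_GPL(prot_guest_has);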

> 
>> +/* 0x800 - 0x8ff reserved for AMD */
>> +#define PATTR_SME   0x800
>> +#define PATTR_SEV   0x801
>> +#define PATTR_SEV_ES0x802
> 
> Why do we need reservations for a purely in-kernel namespace?
> 
> And why are you overloading a brand new generic API with weird details
> of a specific implementation like this?

There was some talk about this on the mailing list where TDX and SEV may
need to be differentiated, so we wanted to reserve a range of values per
technology. I guess I can remove them until they are actually needed.

Thanks,
Tom

> 


Re: [PATCH v2 02/12] mm: Introduce a function to check for virtualization protection features

2021-08-19 Thread Tom Lendacky
On 8/19/21 4:46 AM, Christoph Hellwig wrote:
> On Fri, Aug 13, 2021 at 11:59:21AM -0500, Tom Lendacky wrote:
>> +#define PATTR_MEM_ENCRYPT   0   /* Encrypted memory */
>> +#define PATTR_HOST_MEM_ENCRYPT  1   /* Host encrypted memory */
>> +#define PATTR_GUEST_MEM_ENCRYPT 2   /* Guest encrypted memory */
>> +#define PATTR_GUEST_PROT_STATE  3   /* Guest encrypted state */
> 
> Please write an actual detailed explanation of what these mean, that
> is, what implications they have on the kernel.

Will do.

Thanks,
Tom

> 


Re: [PATCH v2 09/12] mm: Remove the now unused mem_encrypt_active() function

2021-08-17 Thread Tom Lendacky
On 8/17/21 5:24 AM, Borislav Petkov wrote:
> On Tue, Aug 17, 2021 at 12:22:33PM +0200, Borislav Petkov wrote:
>> This one wants to be part of the previous patch.
> 
> ... and the three following patches too - the treewide patch does a
> single atomic :) replacement and that's it.

Ok, I'll squash those all together.

Thanks,
Tom

> 


Re: [PATCH v2 06/12] x86/sev: Replace occurrences of sev_active() with prot_guest_has()

2021-08-17 Thread Tom Lendacky
On 8/17/21 5:02 AM, Borislav Petkov wrote:
> On Fri, Aug 13, 2021 at 11:59:25AM -0500, Tom Lendacky wrote:
>> diff --git a/arch/x86/kernel/machine_kexec_64.c 
>> b/arch/x86/kernel/machine_kexec_64.c
>> index 8e7b517ad738..66ff788b79c9 100644
>> --- a/arch/x86/kernel/machine_kexec_64.c
>> +++ b/arch/x86/kernel/machine_kexec_64.c
>> @@ -167,7 +167,7 @@ static int init_transition_pgtable(struct kimage *image, 
>> pgd_t *pgd)
>>  }
>>  pte = pte_offset_kernel(pmd, vaddr);
>>  
>> -if (sev_active())
>> +if (prot_guest_has(PATTR_GUEST_MEM_ENCRYPT))
>>  prot = PAGE_KERNEL_EXEC;
>>  
>>  set_pte(pte, pfn_pte(paddr >> PAGE_SHIFT, prot));
>> @@ -207,7 +207,7 @@ static int init_pgtable(struct kimage *image, unsigned 
>> long start_pgtable)
>>  level4p = (pgd_t *)__va(start_pgtable);
>>  clear_page(level4p);
>>  
>> -if (sev_active()) {
>> +if (prot_guest_has(PATTR_GUEST_MEM_ENCRYPT)) {
>>  info.page_flag   |= _PAGE_ENC;
>>  info.kernpg_flag |= _PAGE_ENC;
>>  }
>> @@ -570,12 +570,12 @@ void arch_kexec_unprotect_crashkres(void)
>>   */
>>  int arch_kexec_post_alloc_pages(void *vaddr, unsigned int pages, gfp_t gfp)
>>  {
>> -if (sev_active())
>> +if (!prot_guest_has(PATTR_HOST_MEM_ENCRYPT))
>>  return 0;
>>  
>>  /*
>> - * If SME is active we need to be sure that kexec pages are
>> - * not encrypted because when we boot to the new kernel the
>> + * If host memory encryption is active we need to be sure that kexec
>> + * pages are not encrypted because when we boot to the new kernel the
>>   * pages won't be accessed encrypted (initially).
>>   */
> 
> That hunk belongs logically into the previous patch which removes
> sme_active().

I was trying to keep the sev_active() changes separate... so even though
it's an SME thing, I kept it here. But I can move it to the previous
patch, it just might look strange.

> 
>>  return set_memory_decrypted((unsigned long)vaddr, pages);
>> @@ -583,12 +583,12 @@ int arch_kexec_post_alloc_pages(void *vaddr, unsigned 
>> int pages, gfp_t gfp)
>>  
>>  void arch_kexec_pre_free_pages(void *vaddr, unsigned int pages)
>>  {
>> -if (sev_active())
>> +if (!prot_guest_has(PATTR_HOST_MEM_ENCRYPT))
>>  return;
>>  
>>  /*
>> - * If SME is active we need to reset the pages back to being
>> - * an encrypted mapping before freeing them.
>> + * If host memory encryption is active we need to reset the pages back
>> + * to being an encrypted mapping before freeing them.
>>   */
>>  set_memory_encrypted((unsigned long)vaddr, pages);
>>  }
>> diff --git a/arch/x86/kvm/svm/svm.c b/arch/x86/kvm/svm/svm.c
>> index e8ccab50ebf6..b69f5ac622d5 100644
>> --- a/arch/x86/kvm/svm/svm.c
>> +++ b/arch/x86/kvm/svm/svm.c
>> @@ -25,6 +25,7 @@
>>  #include 
>>  #include 
>>  #include 
>> +#include 
>>  
>>  #include 
>>  #include 
>> @@ -457,7 +458,7 @@ static int has_svm(void)
>>  return 0;
>>  }
>>  
>> -if (sev_active()) {
>> +if (prot_guest_has(PATTR_SEV)) {
>>  pr_info("KVM is unsupported when running as an SEV guest\n");
>>  return 0;
> 
> Same question as for PATTR_SME. PATTR_GUEST_MEM_ENCRYPT should be enough.

Yup, I'll change them all.

> 
>> @@ -373,7 +373,7 @@ int __init early_set_memory_encrypted(unsigned long 
>> vaddr, unsigned long size)
>>   * up under SME the trampoline area cannot be encrypted, whereas under SEV
>>   * the trampoline area must be encrypted.
>>   */
>> -bool sev_active(void)
>> +static bool sev_active(void)
>>  {
>>  return sev_status & MSR_AMD64_SEV_ENABLED;
>>  }
>> @@ -382,7 +382,6 @@ static bool sme_active(void)
>>  {
>>  return sme_me_mask && !sev_active();
>>  }
>> -EXPORT_SYMBOL_GPL(sev_active);
> 
> Just get rid of it altogether.

Ok.

Thanks,
Tom

> 
> Thx.
> 


Re: [PATCH v2 03/12] x86/sev: Add an x86 version of prot_guest_has()

2021-08-17 Thread Tom Lendacky
On 8/15/21 9:39 AM, Borislav Petkov wrote:
> On Sun, Aug 15, 2021 at 08:53:31AM -0500, Tom Lendacky wrote:
>> It's not so much a cross-vendor thing as a KVM or other hypervisor
>> thing, where the family doesn't have to be reported as AMD or HYGON.
> 
> What would be the use case? A HV starts a guest which is supposed to be
> encrypted using the AMD's confidential guest technology but the HV tells
> the guest that it is not running on an AMD SVM HV but something else?
> 
> Is that even an actual use case?
> 
> Or am I way off?
> 
> I know we have talked about this in the past but this still sounds
> insane.

Maybe the KVM folks have a better understanding of it...

I can change it to be an AMD/HYGON check...  although I'll have to check
to see if any (very) early use of the function will work with that.

At a minimum, the check in arch/x86/kernel/head64.c will have to be
changed or removed. I'll take a closer look.

Thanks,
Tom

> 


Re: [PATCH v2 05/12] x86/sme: Replace occurrences of sme_active() with prot_guest_has()

2021-08-17 Thread Tom Lendacky
On 8/17/21 4:00 AM, Borislav Petkov wrote:
> On Fri, Aug 13, 2021 at 11:59:24AM -0500, Tom Lendacky wrote:
>> diff --git a/arch/x86/mm/mem_encrypt.c b/arch/x86/mm/mem_encrypt.c
>> index edc67ddf065d..5635ca9a1fbe 100644
>> --- a/arch/x86/mm/mem_encrypt.c
>> +++ b/arch/x86/mm/mem_encrypt.c
>> @@ -144,7 +144,7 @@ void __init sme_unmap_bootdata(char *real_mode_data)
>>  struct boot_params *boot_data;
>>  unsigned long cmdline_paddr;
>>  
>> -if (!sme_active())
>> +if (!amd_prot_guest_has(PATTR_SME))
>>  return;
>>  
>>  /* Get the command line address before unmapping the real_mode_data */
>> @@ -164,7 +164,7 @@ void __init sme_map_bootdata(char *real_mode_data)
>>  struct boot_params *boot_data;
>>  unsigned long cmdline_paddr;
>>  
>> -if (!sme_active())
>> +if (!amd_prot_guest_has(PATTR_SME))
>>  return;
>>  
>>  __sme_early_map_unmap_mem(real_mode_data, sizeof(boot_params), true);
>> @@ -378,7 +378,7 @@ bool sev_active(void)
>>  return sev_status & MSR_AMD64_SEV_ENABLED;
>>  }
>>  
>> -bool sme_active(void)
>> +static bool sme_active(void)
> 
> Just get rid of it altogether. Also, there's an
> 
> EXPORT_SYMBOL_GPL(sev_active);
>
> which needs to go under the actual function. Here's a diff ontop:

Will do.

> 
> ---
> diff --git a/arch/x86/mm/mem_encrypt.c b/arch/x86/mm/mem_encrypt.c
> index 5635ca9a1fbe..a3a2396362a5 100644
> --- a/arch/x86/mm/mem_encrypt.c
> +++ b/arch/x86/mm/mem_encrypt.c
> @@ -364,8 +364,9 @@ int __init early_set_memory_encrypted(unsigned long 
> vaddr, unsigned long size)
>  /*
>   * SME and SEV are very similar but they are not the same, so there are
>   * times that the kernel will need to distinguish between SME and SEV. The
> - * sme_active() and sev_active() functions are used for this.  When a
> - * distinction isn't needed, the mem_encrypt_active() function can be used.
> + * PATTR_HOST_MEM_ENCRYPT and PATTR_GUEST_MEM_ENCRYPT flags to
> + * amd_prot_guest_has() are used for this. When a distinction isn't needed,
> + * the mem_encrypt_active() function can be used.
>   *
>   * The trampoline code is a good example for this requirement.  Before
>   * paging is activated, SME will access all memory as decrypted, but SEV
> @@ -377,11 +378,6 @@ bool sev_active(void)
>  {
>   return sev_status & MSR_AMD64_SEV_ENABLED;
>  }
> -
> -static bool sme_active(void)
> -{
> - return sme_me_mask && !sev_active();
> -}
>  EXPORT_SYMBOL_GPL(sev_active);
>  
>  /* Needs to be called from non-instrumentable code */
> @@ -398,7 +394,7 @@ bool amd_prot_guest_has(unsigned int attr)
>  
>   case PATTR_SME:
>   case PATTR_HOST_MEM_ENCRYPT:
> - return sme_active();
> + return sme_me_mask && !sev_active();
>  
>   case PATTR_SEV:
>   case PATTR_GUEST_MEM_ENCRYPT:
> 
>>  {
>>  return sme_me_mask && !sev_active();
>>  }
>> @@ -428,7 +428,7 @@ bool force_dma_unencrypted(struct device *dev)
>>   * device does not support DMA to addresses that include the
>>   * encryption mask.
>>   */
>> -if (sme_active()) {
>> +if (amd_prot_guest_has(PATTR_SME)) {
> 
> So I'm not sure: you add PATTR_SME which you call with
> amd_prot_guest_has() and PATTR_HOST_MEM_ENCRYPT which you call with
> prot_guest_has() and they both end up being the same thing on AMD.
> 
> So why even bother with PATTR_SME?
> 
> This is only going to cause confusion later and I'd say let's simply use
> prot_guest_has(PATTR_HOST_MEM_ENCRYPT) everywhere...

Ok, I can do that. I was trying to ensure that anything that is truly SME
or SEV specific would be called out now.

I'm ok with letting the TDX folks make changes to these calls to be SME or
SEV specific, if necessary, later.

Thanks,
Tom

> 


Re: [PATCH v2 04/12] powerpc/pseries/svm: Add a powerpc version of prot_guest_has()

2021-08-17 Thread Tom Lendacky
On 8/17/21 3:35 AM, Borislav Petkov wrote:
> On Fri, Aug 13, 2021 at 11:59:23AM -0500, Tom Lendacky wrote:
>> Introduce a powerpc version of the prot_guest_has() function. This will
>> be used to replace the powerpc mem_encrypt_active() implementation, so
>> the implementation will initially only support the PATTR_MEM_ENCRYPT
>> attribute.
>>
>> Cc: Michael Ellerman 
>> Cc: Benjamin Herrenschmidt 
>> Cc: Paul Mackerras 
>> Signed-off-by: Tom Lendacky 
>> ---
>>  arch/powerpc/include/asm/protected_guest.h | 30 ++
>>  arch/powerpc/platforms/pseries/Kconfig |  1 +
>>  2 files changed, 31 insertions(+)
>>  create mode 100644 arch/powerpc/include/asm/protected_guest.h
>>
>> diff --git a/arch/powerpc/include/asm/protected_guest.h 
>> b/arch/powerpc/include/asm/protected_guest.h
>> new file mode 100644
>> index ..ce55c2c7e534
>> --- /dev/null
>> +++ b/arch/powerpc/include/asm/protected_guest.h
>> @@ -0,0 +1,30 @@
>> +/* SPDX-License-Identifier: GPL-2.0-only */
>> +/*
>> + * Protected Guest (and Host) Capability checks
>> + *
>> + * Copyright (C) 2021 Advanced Micro Devices, Inc.
>> + *
>> + * Author: Tom Lendacky 
>> + */
>> +
>> +#ifndef _POWERPC_PROTECTED_GUEST_H
>> +#define _POWERPC_PROTECTED_GUEST_H
>> +
>> +#include 
>> +
>> +#ifndef __ASSEMBLY__
> 
> Same thing here. Pls audit the whole set whether those __ASSEMBLY__
> guards are really needed and remove them if not.

Will do.

Thanks,
Tom

> 
> Thx.
> 


Re: [PATCH v2 03/12] x86/sev: Add an x86 version of prot_guest_has()

2021-08-15 Thread Tom Lendacky

On 8/14/21 2:08 PM, Borislav Petkov wrote:

On Fri, Aug 13, 2021 at 11:59:22AM -0500, Tom Lendacky wrote:

diff --git a/arch/x86/include/asm/protected_guest.h 
b/arch/x86/include/asm/protected_guest.h
new file mode 100644
index ..51e4eefd9542
--- /dev/null
+++ b/arch/x86/include/asm/protected_guest.h
@@ -0,0 +1,29 @@
+/* SPDX-License-Identifier: GPL-2.0-only */
+/*
+ * Protected Guest (and Host) Capability checks
+ *
+ * Copyright (C) 2021 Advanced Micro Devices, Inc.
+ *
+ * Author: Tom Lendacky 
+ */
+
+#ifndef _X86_PROTECTED_GUEST_H
+#define _X86_PROTECTED_GUEST_H
+
+#include 
+
+#ifndef __ASSEMBLY__
+
+static inline bool prot_guest_has(unsigned int attr)
+{
+#ifdef CONFIG_AMD_MEM_ENCRYPT
+   if (sme_me_mask)
+   return amd_prot_guest_has(attr);
+#endif
+
+   return false;
+}
+
+#endif /* __ASSEMBLY__ */
+
+#endif /* _X86_PROTECTED_GUEST_H */


I think this can be simplified more, diff ontop below:

- no need for the ifdeffery as amd_prot_guest_has() has versions for
both when CONFIG_AMD_MEM_ENCRYPT is set and when it is not.


Ugh, yeah, not sure why I put that in for this version since I have the 
static inline for when CONFIG_AMD_MEM_ENCRYPT is not set.




- the sme_me_mask check is pushed there too.

- and since this is vendor-specific, I'm checking the vendor bit. Yeah,
yeah, cross-vendor but I don't really believe that.


It's not so much a cross-vendor thing as a KVM or other hypervisor
thing, where the family doesn't have to be reported as AMD or HYGON. That's
why I made the if check be on sme_me_mask. I think that is the safer way
to go.


Thanks,
Tom



---
diff --git a/arch/x86/include/asm/protected_guest.h 
b/arch/x86/include/asm/protected_guest.h
index 51e4eefd9542..8541c76d5da4 100644
--- a/arch/x86/include/asm/protected_guest.h
+++ b/arch/x86/include/asm/protected_guest.h
@@ -12,18 +12,13 @@
  
  #include 
  
-#ifndef __ASSEMBLY__
-
  static inline bool prot_guest_has(unsigned int attr)
  {
-#ifdef CONFIG_AMD_MEM_ENCRYPT
-   if (sme_me_mask)
+   if (boot_cpu_data.x86_vendor == X86_VENDOR_AMD ||
+   boot_cpu_data.x86_vendor == X86_VENDOR_HYGON)
return amd_prot_guest_has(attr);
-#endif
  
 	return false;
 }
 
-#endif	/* __ASSEMBLY__ */
-
 #endif	/* _X86_PROTECTED_GUEST_H */
diff --git a/arch/x86/mm/mem_encrypt.c b/arch/x86/mm/mem_encrypt.c
index edc67ddf065d..5a0442a6f072 100644
--- a/arch/x86/mm/mem_encrypt.c
+++ b/arch/x86/mm/mem_encrypt.c
@@ -392,6 +392,9 @@ bool noinstr sev_es_active(void)
  
 bool amd_prot_guest_has(unsigned int attr)
 {
+   if (!sme_me_mask)
+   return false;
+
switch (attr) {
case PATTR_MEM_ENCRYPT:
return sme_me_mask != 0;



Re: [PATCH v2 02/12] mm: Introduce a function to check for virtualization protection features

2021-08-14 Thread Tom Lendacky

On 8/14/21 1:32 PM, Borislav Petkov wrote:

On Fri, Aug 13, 2021 at 11:59:21AM -0500, Tom Lendacky wrote:

diff --git a/include/linux/protected_guest.h b/include/linux/protected_guest.h
new file mode 100644
index ..43d4dde94793
--- /dev/null
+++ b/include/linux/protected_guest.h
@@ -0,0 +1,35 @@
+/* SPDX-License-Identifier: GPL-2.0-only */
+/*
+ * Protected Guest (and Host) Capability checks
+ *
+ * Copyright (C) 2021 Advanced Micro Devices, Inc.
+ *
+ * Author: Tom Lendacky 
+ */
+
+#ifndef _PROTECTED_GUEST_H
+#define _PROTECTED_GUEST_H
+
+#ifndef __ASSEMBLY__

   ^

Do you really need that guard? It builds fine without it too. Or
something coming later does need it...?


No, I probably did it out of habit. I can remove it in the next version.

Thanks,
Tom





Re: [PATCH 07/11] treewide: Replace the use of mem_encrypt_active() with prot_guest_has()

2021-08-13 Thread Tom Lendacky

On 8/13/21 12:08 PM, Tom Lendacky wrote:

On 8/12/21 5:07 AM, Kirill A. Shutemov wrote:

On Wed, Aug 11, 2021 at 10:52:55AM -0500, Tom Lendacky wrote:

On 8/11/21 7:19 AM, Kirill A. Shutemov wrote:

On Tue, Aug 10, 2021 at 02:48:54PM -0500, Tom Lendacky wrote:

On 8/10/21 1:45 PM, Kuppuswamy, Sathyanarayanan wrote:

...

Looking at the code again, now I *think* the reason is accessing a global
variable from __startup_64() inside the TDX version of prot_guest_has().

__startup_64() is special. If you access any global variable you need to
use fixup_pointer(). See comment before __startup_64().

I'm not sure how you get away with accessing sme_me_mask directly from
there. Any clues? Maybe it's just luck and the compiler generates code
just right for your case, I dunno.


Hmm... yeah, could be that the compiler is using rip-relative addressing
for it because it lives in the .data section?


I guess. It has to be fixed. It may break with a compiler upgrade or any
random change around the code.


I'll look at doing that separate from this series.
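
(For reference, the fixup_pointer() pattern being discussed; the lines
below are an illustrative sketch, not the actual head64.c change:)

/*
 * __startup_64() runs before relocation is applied, so the link-time
 * address of a global must be adjusted to the current physical load
 * address before it is dereferenced:
 */
u64 *mask_ptr = fixup_pointer(&sme_me_mask, physaddr);

if (*mask_ptr) {
        /* memory encryption is active ... */
}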



BTW, does it work with clang for you?


I haven't tried with clang, I'll check on that.


Just as an FYI, clang also uses rip-relative addressing for those 
variables. No issues booting SME and SEV guests built with clang.


Thanks,
Tom



Thanks,
Tom





Re: [PATCH v2 00/12] Implement generic prot_guest_has() helper function

2021-08-13 Thread Tom Lendacky

On 8/13/21 11:59 AM, Tom Lendacky wrote:

This patch series provides a generic helper function, prot_guest_has(),
to replace the sme_active(), sev_active(), sev_es_active() and
mem_encrypt_active() functions.

It is expected that as new protected virtualization technologies are
added to the kernel, they can all be covered by a single function call
instead of a collection of specific function calls all called from the
same locations.

The powerpc and s390 patches have been compile tested only. Can the
folks copied on this series verify that nothing breaks for them?


There are some patches related to PPC that added new calls to the 
mem_encrypt_active() function that are not yet in the tip tree. After the 
merge window, I'll need to send a v3 with those additional changes before 
this series can be applied.


Thanks,
Tom


Re: [PATCH 07/11] treewide: Replace the use of mem_encrypt_active() with prot_guest_has()

2021-08-13 Thread Tom Lendacky

On 8/12/21 5:07 AM, Kirill A. Shutemov wrote:

On Wed, Aug 11, 2021 at 10:52:55AM -0500, Tom Lendacky wrote:

On 8/11/21 7:19 AM, Kirill A. Shutemov wrote:

On Tue, Aug 10, 2021 at 02:48:54PM -0500, Tom Lendacky wrote:

On 8/10/21 1:45 PM, Kuppuswamy, Sathyanarayanan wrote:

...

Looking at the code again, now I *think* the reason is accessing a global
variable from __startup_64() inside the TDX version of prot_guest_has().

__startup_64() is special. If you access any global variable you need to
use fixup_pointer(). See comment before __startup_64().

I'm not sure how you get away with accessing sme_me_mask directly from
there. Any clues? Maybe it's just luck and the compiler generates code just
right for your case, I dunno.


Hmm... yeah, could be that the compiler is using rip-relative addressing
for it because it lives in the .data section?


I guess. It has to be fixed. It may break with a compiler upgrade or any
random change around the code.


I'll look at doing that separate from this series.



BTW, does it work with clang for you?


I haven't tried with clang, I'll check on that.

Thanks,
Tom





[PATCH v2 12/12] s390/mm: Remove the now unused mem_encrypt_active() function

2021-08-13 Thread Tom Lendacky
The mem_encrypt_active() function has been replaced by prot_guest_has(),
so remove the implementation. Since the default implementation of the
prot_guest_has() matches the s390 implementation of mem_encrypt_active(),
prot_guest_has() does not need to be implemented in s390 (the config
option ARCH_HAS_PROTECTED_GUEST is not set).

Cc: Heiko Carstens 
Cc: Vasily Gorbik 
Cc: Christian Borntraeger 
Signed-off-by: Tom Lendacky 
---
 arch/s390/include/asm/mem_encrypt.h | 2 --
 1 file changed, 2 deletions(-)

diff --git a/arch/s390/include/asm/mem_encrypt.h 
b/arch/s390/include/asm/mem_encrypt.h
index 2542cbf7e2d1..08a8b96606d7 100644
--- a/arch/s390/include/asm/mem_encrypt.h
+++ b/arch/s390/include/asm/mem_encrypt.h
@@ -4,8 +4,6 @@
 
 #ifndef __ASSEMBLY__
 
-static inline bool mem_encrypt_active(void) { return false; }
-
 int set_memory_encrypted(unsigned long addr, int numpages);
 int set_memory_decrypted(unsigned long addr, int numpages);
 
-- 
2.32.0



[PATCH v2 11/12] powerpc/pseries/svm: Remove the now unused mem_encrypt_active() function

2021-08-13 Thread Tom Lendacky
The mem_encrypt_active() function has been replaced by prot_guest_has(),
so remove the implementation.

Cc: Michael Ellerman 
Cc: Benjamin Herrenschmidt 
Cc: Paul Mackerras 
Signed-off-by: Tom Lendacky 
---
 arch/powerpc/include/asm/mem_encrypt.h | 5 -
 1 file changed, 5 deletions(-)

diff --git a/arch/powerpc/include/asm/mem_encrypt.h 
b/arch/powerpc/include/asm/mem_encrypt.h
index ba9dab07c1be..2f26b8fc8d29 100644
--- a/arch/powerpc/include/asm/mem_encrypt.h
+++ b/arch/powerpc/include/asm/mem_encrypt.h
@@ -10,11 +10,6 @@
 
 #include 
 
-static inline bool mem_encrypt_active(void)
-{
-   return is_secure_guest();
-}
-
 static inline bool force_dma_unencrypted(struct device *dev)
 {
return is_secure_guest();
-- 
2.32.0



[PATCH v2 10/12] x86/sev: Remove the now unused mem_encrypt_active() function

2021-08-13 Thread Tom Lendacky
The mem_encrypt_active() function has been replaced by prot_guest_has(),
so remove the implementation.

Cc: Thomas Gleixner 
Cc: Ingo Molnar 
Cc: Borislav Petkov 
Reviewed-by: Joerg Roedel 
Signed-off-by: Tom Lendacky 
---
 arch/x86/include/asm/mem_encrypt.h | 5 -
 1 file changed, 5 deletions(-)

diff --git a/arch/x86/include/asm/mem_encrypt.h 
b/arch/x86/include/asm/mem_encrypt.h
index 797146e0cd6b..94c089e9ea69 100644
--- a/arch/x86/include/asm/mem_encrypt.h
+++ b/arch/x86/include/asm/mem_encrypt.h
@@ -97,11 +97,6 @@ static inline void mem_encrypt_free_decrypted_mem(void) { }
 
 extern char __start_bss_decrypted[], __end_bss_decrypted[], 
__start_bss_decrypted_unused[];
 
-static inline bool mem_encrypt_active(void)
-{
-   return sme_me_mask;
-}
-
 static inline u64 sme_get_me_mask(void)
 {
return sme_me_mask;
-- 
2.32.0



[PATCH v2 09/12] mm: Remove the now unused mem_encrypt_active() function

2021-08-13 Thread Tom Lendacky
The mem_encrypt_active() function has been replaced by prot_guest_has(),
so remove the implementation.

Reviewed-by: Joerg Roedel 
Signed-off-by: Tom Lendacky 
---
 include/linux/mem_encrypt.h | 4 
 1 file changed, 4 deletions(-)

diff --git a/include/linux/mem_encrypt.h b/include/linux/mem_encrypt.h
index 5c4a18a91f89..ae4526389261 100644
--- a/include/linux/mem_encrypt.h
+++ b/include/linux/mem_encrypt.h
@@ -16,10 +16,6 @@
 
 #include 
 
-#else  /* !CONFIG_ARCH_HAS_MEM_ENCRYPT */
-
-static inline bool mem_encrypt_active(void) { return false; }
-
 #endif /* CONFIG_ARCH_HAS_MEM_ENCRYPT */
 
 #ifdef CONFIG_AMD_MEM_ENCRYPT
-- 
2.32.0



[PATCH v2 08/12] treewide: Replace the use of mem_encrypt_active() with prot_guest_has()

2021-08-13 Thread Tom Lendacky
Replace occurrences of mem_encrypt_active() with calls to prot_guest_has()
with the PATTR_MEM_ENCRYPT attribute.

Cc: Thomas Gleixner 
Cc: Ingo Molnar 
Cc: Borislav Petkov 
Cc: Dave Hansen 
Cc: Andy Lutomirski 
Cc: Peter Zijlstra 
Cc: David Airlie 
Cc: Daniel Vetter 
Cc: Maarten Lankhorst 
Cc: Maxime Ripard 
Cc: Thomas Zimmermann 
Cc: VMware Graphics 
Cc: Joerg Roedel 
Cc: Will Deacon 
Cc: Dave Young 
Cc: Baoquan He 
Signed-off-by: Tom Lendacky 
---
 arch/x86/kernel/head64.c                | 4 ++--
 arch/x86/mm/ioremap.c                   | 4 ++--
 arch/x86/mm/mem_encrypt.c               | 5 ++---
 arch/x86/mm/pat/set_memory.c            | 3 ++-
 drivers/gpu/drm/amd/amdgpu/amdgpu_drv.c | 4 +++-
 drivers/gpu/drm/drm_cache.c             | 4 ++--
 drivers/gpu/drm/vmwgfx/vmwgfx_drv.c     | 4 ++--
 drivers/gpu/drm/vmwgfx/vmwgfx_msg.c     | 6 +++---
 drivers/iommu/amd/iommu.c               | 3 ++-
 drivers/iommu/amd/iommu_v2.c            | 3 ++-
 drivers/iommu/iommu.c                   | 3 ++-
 fs/proc/vmcore.c                        | 6 +++---
 kernel/dma/swiotlb.c                    | 4 ++--
 13 files changed, 29 insertions(+), 24 deletions(-)

diff --git a/arch/x86/kernel/head64.c b/arch/x86/kernel/head64.c
index de01903c3735..cafed6456d45 100644
--- a/arch/x86/kernel/head64.c
+++ b/arch/x86/kernel/head64.c
@@ -19,7 +19,7 @@
 #include 
 #include 
 #include 
-#include 
+#include 
 #include 
 
 #include 
@@ -285,7 +285,7 @@ unsigned long __head __startup_64(unsigned long physaddr,
 * there is no need to zero it after changing the memory encryption
 * attribute.
 */
-   if (mem_encrypt_active()) {
+   if (prot_guest_has(PATTR_MEM_ENCRYPT)) {
vaddr = (unsigned long)__start_bss_decrypted;
vaddr_end = (unsigned long)__end_bss_decrypted;
for (; vaddr < vaddr_end; vaddr += PMD_SIZE) {
diff --git a/arch/x86/mm/ioremap.c b/arch/x86/mm/ioremap.c
index 3ed0f28f12af..7f012fc1b600 100644
--- a/arch/x86/mm/ioremap.c
+++ b/arch/x86/mm/ioremap.c
@@ -694,7 +694,7 @@ static bool __init 
early_memremap_is_setup_data(resource_size_t phys_addr,
 bool arch_memremap_can_ram_remap(resource_size_t phys_addr, unsigned long size,
 unsigned long flags)
 {
-   if (!mem_encrypt_active())
+   if (!prot_guest_has(PATTR_MEM_ENCRYPT))
return true;
 
if (flags & MEMREMAP_ENC)
@@ -724,7 +724,7 @@ pgprot_t __init 
early_memremap_pgprot_adjust(resource_size_t phys_addr,
 {
bool encrypted_prot;
 
-   if (!mem_encrypt_active())
+   if (!prot_guest_has(PATTR_MEM_ENCRYPT))
return prot;
 
encrypted_prot = true;
diff --git a/arch/x86/mm/mem_encrypt.c b/arch/x86/mm/mem_encrypt.c
index 38dfa84b77a1..69aed9935b5e 100644
--- a/arch/x86/mm/mem_encrypt.c
+++ b/arch/x86/mm/mem_encrypt.c
@@ -364,8 +364,7 @@ int __init early_set_memory_encrypted(unsigned long vaddr, 
unsigned long size)
 /*
  * SME and SEV are very similar but they are not the same, so there are
  * times that the kernel will need to distinguish between SME and SEV. The
- * sme_active() and sev_active() functions are used for this.  When a
- * distinction isn't needed, the mem_encrypt_active() function can be used.
+ * sme_active() and sev_active() functions are used for this.
  *
  * The trampoline code is a good example for this requirement.  Before
  * paging is activated, SME will access all memory as decrypted, but SEV
@@ -451,7 +450,7 @@ void __init mem_encrypt_free_decrypted_mem(void)
 * The unused memory range was mapped decrypted, change the encryption
 * attribute from decrypted to encrypted before freeing it.
 */
-   if (mem_encrypt_active()) {
+   if (amd_prot_guest_has(PATTR_MEM_ENCRYPT)) {
r = set_memory_encrypted(vaddr, npages);
if (r) {
pr_warn("failed to free unused decrypted pages\n");
diff --git a/arch/x86/mm/pat/set_memory.c b/arch/x86/mm/pat/set_memory.c
index ad8a5c586a35..6925f2bb4be1 100644
--- a/arch/x86/mm/pat/set_memory.c
+++ b/arch/x86/mm/pat/set_memory.c
@@ -18,6 +18,7 @@
 #include 
 #include 
 #include 
+#include 
 
 #include 
 #include 
@@ -1986,7 +1987,7 @@ static int __set_memory_enc_dec(unsigned long addr, int 
numpages, bool enc)
int ret;
 
/* Nothing to do if memory encryption is not active */
-   if (!mem_encrypt_active())
+   if (!prot_guest_has(PATTR_MEM_ENCRYPT))
return 0;
 
/* Should not be working on unaligned addresses */
diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_drv.c 
b/drivers/gpu/drm/amd/amdgpu/amdgpu_drv.c
index 971c5b8e75dc..21c1e3056070 100644
--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_drv.c
+++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_drv.c
@@ -38,6 +38,7 @@
 #include 
 #include 
 #include 
+#include 
 
 #include "amdgpu.h"
 #include "amdgpu_irq.h"
@@ -1250,7 +1251,8 @@ static int 

[PATCH v2 07/12] x86/sev: Replace occurrences of sev_es_active() with prot_guest_has()

2021-08-13 Thread Tom Lendacky
Replace occurrences of sev_es_active() with the more generic
prot_guest_has() using PATTR_GUEST_PROT_STATE, except for in
arch/x86/kernel/sev*.c and arch/x86/mm/mem_encrypt*.c where PATTR_SEV_ES
will be used. If future support is added for other memory encryption
technologies, the use of PATTR_GUEST_PROT_STATE can be updated, as
required, to specifically use PATTR_SEV_ES.

Cc: Thomas Gleixner 
Cc: Ingo Molnar 
Cc: Borislav Petkov 
Signed-off-by: Tom Lendacky 
---
 arch/x86/include/asm/mem_encrypt.h | 2 --
 arch/x86/kernel/sev.c  | 6 +++---
 arch/x86/mm/mem_encrypt.c  | 7 +++
 arch/x86/realmode/init.c   | 3 +--
 4 files changed, 7 insertions(+), 11 deletions(-)

diff --git a/arch/x86/include/asm/mem_encrypt.h 
b/arch/x86/include/asm/mem_encrypt.h
index 7e25de37c148..797146e0cd6b 100644
--- a/arch/x86/include/asm/mem_encrypt.h
+++ b/arch/x86/include/asm/mem_encrypt.h
@@ -50,7 +50,6 @@ void __init mem_encrypt_free_decrypted_mem(void);
 void __init mem_encrypt_init(void);
 
 void __init sev_es_init_vc_handling(void);
-bool sev_es_active(void);
 bool amd_prot_guest_has(unsigned int attr);
 
 #define __bss_decrypted __section(".bss..decrypted")
@@ -74,7 +73,6 @@ static inline void __init sme_encrypt_kernel(struct 
boot_params *bp) { }
 static inline void __init sme_enable(struct boot_params *bp) { }
 
 static inline void sev_es_init_vc_handling(void) { }
-static inline bool sev_es_active(void) { return false; }
 static inline bool amd_prot_guest_has(unsigned int attr) { return false; }
 
 static inline int __init
diff --git a/arch/x86/kernel/sev.c b/arch/x86/kernel/sev.c
index a6895e440bc3..66a4ab9d95d7 100644
--- a/arch/x86/kernel/sev.c
+++ b/arch/x86/kernel/sev.c
@@ -11,7 +11,7 @@
 
 #include  /* For show_regs() */
 #include 
-#include 
+#include 
 #include 
 #include 
 #include 
@@ -615,7 +615,7 @@ int __init sev_es_efi_map_ghcbs(pgd_t *pgd)
int cpu;
u64 pfn;
 
-   if (!sev_es_active())
+   if (!prot_guest_has(PATTR_SEV_ES))
return 0;
 
pflags = _PAGE_NX | _PAGE_RW;
@@ -774,7 +774,7 @@ void __init sev_es_init_vc_handling(void)
 
BUILD_BUG_ON(offsetof(struct sev_es_runtime_data, ghcb_page) % 
PAGE_SIZE);
 
-   if (!sev_es_active())
+   if (!prot_guest_has(PATTR_SEV_ES))
return;
 
if (!sev_es_check_cpu_features())
diff --git a/arch/x86/mm/mem_encrypt.c b/arch/x86/mm/mem_encrypt.c
index 83bc928f529e..38dfa84b77a1 100644
--- a/arch/x86/mm/mem_encrypt.c
+++ b/arch/x86/mm/mem_encrypt.c
@@ -383,8 +383,7 @@ static bool sme_active(void)
return sme_me_mask && !sev_active();
 }
 
-/* Needs to be called from non-instrumentable code */
-bool noinstr sev_es_active(void)
+static bool sev_es_active(void)
 {
return sev_status & MSR_AMD64_SEV_ES_ENABLED;
 }
@@ -482,7 +481,7 @@ static void print_mem_encrypt_feature_info(void)
pr_cont(" SEV");
 
/* Encrypted Register State */
-   if (sev_es_active())
+   if (amd_prot_guest_has(PATTR_SEV_ES))
pr_cont(" SEV-ES");
 
pr_cont("\n");
@@ -501,7 +500,7 @@ void __init mem_encrypt_init(void)
 * With SEV, we need to unroll the rep string I/O instructions,
 * but SEV-ES supports them through the #VC handler.
 */
-   if (amd_prot_guest_has(PATTR_SEV) && !sev_es_active())
+   if (amd_prot_guest_has(PATTR_SEV) && !amd_prot_guest_has(PATTR_SEV_ES))
static_branch_enable(_enable_key);
 
print_mem_encrypt_feature_info();
diff --git a/arch/x86/realmode/init.c b/arch/x86/realmode/init.c
index 2109ae569c67..7711d0071f41 100644
--- a/arch/x86/realmode/init.c
+++ b/arch/x86/realmode/init.c
@@ -2,7 +2,6 @@
 #include 
 #include 
 #include 
-#include 
 #include 
 #include 
 
@@ -48,7 +47,7 @@ static void sme_sev_setup_real_mode(struct trampoline_header 
*th)
if (prot_guest_has(PATTR_HOST_MEM_ENCRYPT))
th->flags |= TH_FLAGS_SME_ACTIVE;
 
-   if (sev_es_active()) {
+   if (prot_guest_has(PATTR_GUEST_PROT_STATE)) {
/*
 * Skip the call to verify_cpu() in secondary_startup_64 as it
 * will cause #VC exceptions when the AP can't handle them yet.
-- 
2.32.0



[PATCH v2 06/12] x86/sev: Replace occurrences of sev_active() with prot_guest_has()

2021-08-13 Thread Tom Lendacky
Replace occurrences of sev_active() with the more generic prot_guest_has()
using PATTR_GUEST_MEM_ENCRYPT, except for in arch/x86/mm/mem_encrypt*.c
where PATTR_SEV will be used. If future support is added for other memory
encryption technologies, the use of PATTR_GUEST_MEM_ENCRYPT can be
updated, as required, to use PATTR_SEV.

Cc: Thomas Gleixner 
Cc: Ingo Molnar 
Cc: Borislav Petkov 
Cc: Dave Hansen 
Cc: Andy Lutomirski 
Cc: Peter Zijlstra 
Cc: Ard Biesheuvel 
Reviewed-by: Joerg Roedel 
Signed-off-by: Tom Lendacky 
---
 arch/x86/include/asm/mem_encrypt.h |  2 --
 arch/x86/kernel/crash_dump_64.c    |  4 +++-
 arch/x86/kernel/kvm.c              |  3 ++-
 arch/x86/kernel/kvmclock.c         |  4 ++--
 arch/x86/kernel/machine_kexec_64.c | 16 
 arch/x86/kvm/svm/svm.c             |  3 ++-
 arch/x86/mm/ioremap.c              |  6 +++---
 arch/x86/mm/mem_encrypt.c          | 15 +++
 arch/x86/platform/efi/efi_64.c     |  9 +
 9 files changed, 32 insertions(+), 30 deletions(-)

diff --git a/arch/x86/include/asm/mem_encrypt.h 
b/arch/x86/include/asm/mem_encrypt.h
index 956338406cec..7e25de37c148 100644
--- a/arch/x86/include/asm/mem_encrypt.h
+++ b/arch/x86/include/asm/mem_encrypt.h
@@ -50,7 +50,6 @@ void __init mem_encrypt_free_decrypted_mem(void);
 void __init mem_encrypt_init(void);
 
 void __init sev_es_init_vc_handling(void);
-bool sev_active(void);
 bool sev_es_active(void);
 bool amd_prot_guest_has(unsigned int attr);
 
@@ -75,7 +74,6 @@ static inline void __init sme_encrypt_kernel(struct 
boot_params *bp) { }
 static inline void __init sme_enable(struct boot_params *bp) { }
 
 static inline void sev_es_init_vc_handling(void) { }
-static inline bool sev_active(void) { return false; }
 static inline bool sev_es_active(void) { return false; }
 static inline bool amd_prot_guest_has(unsigned int attr) { return false; }
 
diff --git a/arch/x86/kernel/crash_dump_64.c b/arch/x86/kernel/crash_dump_64.c
index 045e82e8945b..0cfe35f03e67 100644
--- a/arch/x86/kernel/crash_dump_64.c
+++ b/arch/x86/kernel/crash_dump_64.c
@@ -10,6 +10,7 @@
 #include 
 #include 
 #include 
+#include 
 
 static ssize_t __copy_oldmem_page(unsigned long pfn, char *buf, size_t csize,
  unsigned long offset, int userbuf,
@@ -73,5 +74,6 @@ ssize_t copy_oldmem_page_encrypted(unsigned long pfn, char 
*buf, size_t csize,
 
 ssize_t elfcorehdr_read(char *buf, size_t count, u64 *ppos)
 {
-   return read_from_oldmem(buf, count, ppos, 0, sev_active());
+   return read_from_oldmem(buf, count, ppos, 0,
+   prot_guest_has(PATTR_GUEST_MEM_ENCRYPT));
 }
diff --git a/arch/x86/kernel/kvm.c b/arch/x86/kernel/kvm.c
index a26643dc6bd6..9d08ad2f3faa 100644
--- a/arch/x86/kernel/kvm.c
+++ b/arch/x86/kernel/kvm.c
@@ -27,6 +27,7 @@
 #include 
 #include 
 #include 
+#include 
 #include 
 #include 
 #include 
@@ -418,7 +419,7 @@ static void __init sev_map_percpu_data(void)
 {
int cpu;
 
-   if (!sev_active())
+   if (!prot_guest_has(PATTR_GUEST_MEM_ENCRYPT))
return;
 
for_each_possible_cpu(cpu) {
diff --git a/arch/x86/kernel/kvmclock.c b/arch/x86/kernel/kvmclock.c
index ad273e5861c1..f7ba78a23dcd 100644
--- a/arch/x86/kernel/kvmclock.c
+++ b/arch/x86/kernel/kvmclock.c
@@ -16,9 +16,9 @@
 #include 
 #include 
 #include 
+#include 
 
 #include 
-#include 
 #include 
 #include 
 
@@ -232,7 +232,7 @@ static void __init kvmclock_init_mem(void)
 * hvclock is shared between the guest and the hypervisor, must
 * be mapped decrypted.
 */
-   if (sev_active()) {
+   if (prot_guest_has(PATTR_GUEST_MEM_ENCRYPT)) {
r = set_memory_decrypted((unsigned long) hvclock_mem,
 1UL << order);
if (r) {
diff --git a/arch/x86/kernel/machine_kexec_64.c 
b/arch/x86/kernel/machine_kexec_64.c
index 8e7b517ad738..66ff788b79c9 100644
--- a/arch/x86/kernel/machine_kexec_64.c
+++ b/arch/x86/kernel/machine_kexec_64.c
@@ -167,7 +167,7 @@ static int init_transition_pgtable(struct kimage *image, 
pgd_t *pgd)
}
pte = pte_offset_kernel(pmd, vaddr);
 
-   if (sev_active())
+   if (prot_guest_has(PATTR_GUEST_MEM_ENCRYPT))
prot = PAGE_KERNEL_EXEC;
 
set_pte(pte, pfn_pte(paddr >> PAGE_SHIFT, prot));
@@ -207,7 +207,7 @@ static int init_pgtable(struct kimage *image, unsigned long 
start_pgtable)
level4p = (pgd_t *)__va(start_pgtable);
clear_page(level4p);
 
-   if (sev_active()) {
+   if (prot_guest_has(PATTR_GUEST_MEM_ENCRYPT)) {
info.page_flag   |= _PAGE_ENC;
info.kernpg_flag |= _PAGE_ENC;
}
@@ -570,12 +570,12 @@ void arch_kexec_unprotect_crashkres(void)
  */
 int arch_kexec_post_alloc_pages(void *vaddr, unsigned int pages, gfp_t gfp)
 {
-   if (sev_active())
+   if (!prot_guest_has(PATTR_HOST_MEM_ENCRYPT))

[PATCH v2 05/12] x86/sme: Replace occurrences of sme_active() with prot_guest_has()

2021-08-13 Thread Tom Lendacky
Replace occurrences of sme_active() with the more generic prot_guest_has()
using PATTR_HOST_MEM_ENCRYPT, except for in arch/x86/mm/mem_encrypt*.c
where PATTR_SME will be used. If future support is added for other memory
encryption technologies, the use of PATTR_HOST_MEM_ENCRYPT can be
updated, as required, to use PATTR_SME.

Cc: Thomas Gleixner 
Cc: Ingo Molnar 
Cc: Borislav Petkov 
Cc: Dave Hansen 
Cc: Andy Lutomirski 
Cc: Peter Zijlstra 
Cc: Joerg Roedel 
Cc: Will Deacon 
Reviewed-by: Joerg Roedel 
Signed-off-by: Tom Lendacky 
---
 arch/x86/include/asm/kexec.h         |  2 +-
 arch/x86/include/asm/mem_encrypt.h   |  2 --
 arch/x86/kernel/machine_kexec_64.c   |  3 ++-
 arch/x86/kernel/pci-swiotlb.c        |  9 -
 arch/x86/kernel/relocate_kernel_64.S |  2 +-
 arch/x86/mm/ioremap.c                |  6 +++---
 arch/x86/mm/mem_encrypt.c            | 10 +-
 arch/x86/mm/mem_encrypt_identity.c   |  3 ++-
 arch/x86/realmode/init.c             |  5 +++--
 drivers/iommu/amd/init.c             |  7 ---
 10 files changed, 25 insertions(+), 24 deletions(-)

diff --git a/arch/x86/include/asm/kexec.h b/arch/x86/include/asm/kexec.h
index 0a6e34b07017..11b7c06e2828 100644
--- a/arch/x86/include/asm/kexec.h
+++ b/arch/x86/include/asm/kexec.h
@@ -129,7 +129,7 @@ relocate_kernel(unsigned long indirection_page,
unsigned long page_list,
unsigned long start_address,
unsigned int preserve_context,
-   unsigned int sme_active);
+   unsigned int host_mem_enc_active);
 #endif
 
 #define ARCH_HAS_KIMAGE_ARCH
diff --git a/arch/x86/include/asm/mem_encrypt.h 
b/arch/x86/include/asm/mem_encrypt.h
index a46d47662772..956338406cec 100644
--- a/arch/x86/include/asm/mem_encrypt.h
+++ b/arch/x86/include/asm/mem_encrypt.h
@@ -50,7 +50,6 @@ void __init mem_encrypt_free_decrypted_mem(void);
 void __init mem_encrypt_init(void);
 
 void __init sev_es_init_vc_handling(void);
-bool sme_active(void);
 bool sev_active(void);
 bool sev_es_active(void);
 bool amd_prot_guest_has(unsigned int attr);
@@ -76,7 +75,6 @@ static inline void __init sme_encrypt_kernel(struct 
boot_params *bp) { }
 static inline void __init sme_enable(struct boot_params *bp) { }
 
 static inline void sev_es_init_vc_handling(void) { }
-static inline bool sme_active(void) { return false; }
 static inline bool sev_active(void) { return false; }
 static inline bool sev_es_active(void) { return false; }
 static inline bool amd_prot_guest_has(unsigned int attr) { return false; }
diff --git a/arch/x86/kernel/machine_kexec_64.c 
b/arch/x86/kernel/machine_kexec_64.c
index 131f30fdcfbd..8e7b517ad738 100644
--- a/arch/x86/kernel/machine_kexec_64.c
+++ b/arch/x86/kernel/machine_kexec_64.c
@@ -17,6 +17,7 @@
 #include 
 #include 
 #include 
+#include 
 
 #include 
 #include 
@@ -358,7 +359,7 @@ void machine_kexec(struct kimage *image)
   (unsigned long)page_list,
   image->start,
   image->preserve_context,
-  sme_active());
+  prot_guest_has(PATTR_HOST_MEM_ENCRYPT));
 
 #ifdef CONFIG_KEXEC_JUMP
if (image->preserve_context)
diff --git a/arch/x86/kernel/pci-swiotlb.c b/arch/x86/kernel/pci-swiotlb.c
index c2cfa5e7c152..bd9a9cfbc9a2 100644
--- a/arch/x86/kernel/pci-swiotlb.c
+++ b/arch/x86/kernel/pci-swiotlb.c
@@ -6,7 +6,7 @@
 #include 
 #include 
 #include 
-#include 
+#include 
 
 #include 
 #include 
@@ -45,11 +45,10 @@ int __init pci_swiotlb_detect_4gb(void)
swiotlb = 1;
 
/*
-* If SME is active then swiotlb will be set to 1 so that bounce
-* buffers are allocated and used for devices that do not support
-* the addressing range required for the encryption mask.
+* Set swiotlb to 1 so that bounce buffers are allocated and used for
+* devices that can't support DMA to encrypted memory.
 */
-   if (sme_active())
+   if (prot_guest_has(PATTR_HOST_MEM_ENCRYPT))
swiotlb = 1;
 
return swiotlb;
diff --git a/arch/x86/kernel/relocate_kernel_64.S 
b/arch/x86/kernel/relocate_kernel_64.S
index c53271aebb64..c8fe74a28143 100644
--- a/arch/x86/kernel/relocate_kernel_64.S
+++ b/arch/x86/kernel/relocate_kernel_64.S
@@ -47,7 +47,7 @@ SYM_CODE_START_NOALIGN(relocate_kernel)
 * %rsi page_list
 * %rdx start address
 * %rcx preserve_context
-* %r8  sme_active
+* %r8  host_mem_enc_active
 */
 
/* Save the CPU context, used for jumping back */
diff --git a/arch/x86/mm/ioremap.c b/arch/x86/mm/ioremap.c
index ccff76cedd8f..583afd54c7e1 100644
--- a/arch/x86/mm/ioremap.c
+++ b/arch/x86/mm/ioremap.c
@@ -14,7 +14,7 @@
 #include 
 #include 
 #include 
-#include 
+#include 
 #include 
 #include 
 
@@ -703,7 +703,7 @@ bool arch_memremap_can_ram_remap(resource_size_t phys_addr, 

[PATCH v2 04/12] powerpc/pseries/svm: Add a powerpc version of prot_guest_has()

2021-08-13 Thread Tom Lendacky
Introduce a powerpc version of the prot_guest_has() function. This will
be used to replace the powerpc mem_encrypt_active() implementation, so
the implementation will initially only support the PATTR_MEM_ENCRYPT
attribute.

Cc: Michael Ellerman 
Cc: Benjamin Herrenschmidt 
Cc: Paul Mackerras 
Signed-off-by: Tom Lendacky 
---
 arch/powerpc/include/asm/protected_guest.h | 30 ++
 arch/powerpc/platforms/pseries/Kconfig |  1 +
 2 files changed, 31 insertions(+)
 create mode 100644 arch/powerpc/include/asm/protected_guest.h

diff --git a/arch/powerpc/include/asm/protected_guest.h 
b/arch/powerpc/include/asm/protected_guest.h
new file mode 100644
index ..ce55c2c7e534
--- /dev/null
+++ b/arch/powerpc/include/asm/protected_guest.h
@@ -0,0 +1,30 @@
+/* SPDX-License-Identifier: GPL-2.0-only */
+/*
+ * Protected Guest (and Host) Capability checks
+ *
+ * Copyright (C) 2021 Advanced Micro Devices, Inc.
+ *
+ * Author: Tom Lendacky 
+ */
+
+#ifndef _POWERPC_PROTECTED_GUEST_H
+#define _POWERPC_PROTECTED_GUEST_H
+
+#include 
+
+#ifndef __ASSEMBLY__
+
+static inline bool prot_guest_has(unsigned int attr)
+{
+   switch (attr) {
+   case PATTR_MEM_ENCRYPT:
+   return is_secure_guest();
+
+   default:
+   return false;
+   }
+}
+
+#endif /* __ASSEMBLY__ */
+
+#endif /* _POWERPC_PROTECTED_GUEST_H */
diff --git a/arch/powerpc/platforms/pseries/Kconfig 
b/arch/powerpc/platforms/pseries/Kconfig
index 5e037df2a3a1..8ce5417d6feb 100644
--- a/arch/powerpc/platforms/pseries/Kconfig
+++ b/arch/powerpc/platforms/pseries/Kconfig
@@ -159,6 +159,7 @@ config PPC_SVM
select SWIOTLB
select ARCH_HAS_MEM_ENCRYPT
select ARCH_HAS_FORCE_DMA_UNENCRYPTED
+   select ARCH_HAS_PROTECTED_GUEST
help
 There are certain POWER platforms which support secure guests using
 the Protected Execution Facility, with the help of an Ultravisor
-- 
2.32.0



[PATCH v2 03/12] x86/sev: Add an x86 version of prot_guest_has()

2021-08-13 Thread Tom Lendacky
Introduce an x86 version of the prot_guest_has() function. This will be
used in the more generic x86 code to replace vendor specific calls like
sev_active(), etc.

While the name suggests this is intended mainly for guests, it will
also be used for host memory encryption checks in place of sme_active().

Cc: Thomas Gleixner 
Cc: Ingo Molnar 
Cc: Borislav Petkov 
Cc: Dave Hansen 
Cc: Andy Lutomirski 
Cc: Peter Zijlstra 
Reviewed-by: Joerg Roedel 
Co-developed-by: Andi Kleen 
Signed-off-by: Andi Kleen 
Co-developed-by: Kuppuswamy Sathyanarayanan 

Signed-off-by: Kuppuswamy Sathyanarayanan 

Signed-off-by: Tom Lendacky 
---
 arch/x86/Kconfig                       |  1 +
 arch/x86/include/asm/mem_encrypt.h     |  2 ++
 arch/x86/include/asm/protected_guest.h | 29 ++
 arch/x86/mm/mem_encrypt.c              | 25 ++
 include/linux/protected_guest.h        |  5 +
 5 files changed, 62 insertions(+)
 create mode 100644 arch/x86/include/asm/protected_guest.h

diff --git a/arch/x86/Kconfig b/arch/x86/Kconfig
index 421fa9e38c60..82e5fb713261 100644
--- a/arch/x86/Kconfig
+++ b/arch/x86/Kconfig
@@ -1514,6 +1514,7 @@ config AMD_MEM_ENCRYPT
select ARCH_HAS_FORCE_DMA_UNENCRYPTED
select INSTRUCTION_DECODER
select ARCH_HAS_RESTRICTED_VIRTIO_MEMORY_ACCESS
+   select ARCH_HAS_PROTECTED_GUEST
help
  Say yes to enable support for the encryption of system memory.
  This requires an AMD processor that supports Secure Memory
diff --git a/arch/x86/include/asm/mem_encrypt.h 
b/arch/x86/include/asm/mem_encrypt.h
index 9c80c68d75b5..a46d47662772 100644
--- a/arch/x86/include/asm/mem_encrypt.h
+++ b/arch/x86/include/asm/mem_encrypt.h
@@ -53,6 +53,7 @@ void __init sev_es_init_vc_handling(void);
 bool sme_active(void);
 bool sev_active(void);
 bool sev_es_active(void);
+bool amd_prot_guest_has(unsigned int attr);
 
 #define __bss_decrypted __section(".bss..decrypted")
 
@@ -78,6 +79,7 @@ static inline void sev_es_init_vc_handling(void) { }
 static inline bool sme_active(void) { return false; }
 static inline bool sev_active(void) { return false; }
 static inline bool sev_es_active(void) { return false; }
+static inline bool amd_prot_guest_has(unsigned int attr) { return false; }
 
 static inline int __init
 early_set_memory_decrypted(unsigned long vaddr, unsigned long size) { return 
0; }
diff --git a/arch/x86/include/asm/protected_guest.h 
b/arch/x86/include/asm/protected_guest.h
new file mode 100644
index ..51e4eefd9542
--- /dev/null
+++ b/arch/x86/include/asm/protected_guest.h
@@ -0,0 +1,29 @@
+/* SPDX-License-Identifier: GPL-2.0-only */
+/*
+ * Protected Guest (and Host) Capability checks
+ *
+ * Copyright (C) 2021 Advanced Micro Devices, Inc.
+ *
+ * Author: Tom Lendacky 
+ */
+
+#ifndef _X86_PROTECTED_GUEST_H
+#define _X86_PROTECTED_GUEST_H
+
+#include 
+
+#ifndef __ASSEMBLY__
+
+static inline bool prot_guest_has(unsigned int attr)
+{
+#ifdef CONFIG_AMD_MEM_ENCRYPT
+   if (sme_me_mask)
+   return amd_prot_guest_has(attr);
+#endif
+
+   return false;
+}
+
+#endif /* __ASSEMBLY__ */
+
+#endif /* _X86_PROTECTED_GUEST_H */
diff --git a/arch/x86/mm/mem_encrypt.c b/arch/x86/mm/mem_encrypt.c
index ff08dc463634..edc67ddf065d 100644
--- a/arch/x86/mm/mem_encrypt.c
+++ b/arch/x86/mm/mem_encrypt.c
@@ -20,6 +20,7 @@
 #include 
 #include 
 #include 
+#include 
 
 #include 
 #include 
@@ -389,6 +390,30 @@ bool noinstr sev_es_active(void)
return sev_status & MSR_AMD64_SEV_ES_ENABLED;
 }
 
+bool amd_prot_guest_has(unsigned int attr)
+{
+   switch (attr) {
+   case PATTR_MEM_ENCRYPT:
+   return sme_me_mask != 0;
+
+   case PATTR_SME:
+   case PATTR_HOST_MEM_ENCRYPT:
+   return sme_active();
+
+   case PATTR_SEV:
+   case PATTR_GUEST_MEM_ENCRYPT:
+   return sev_active();
+
+   case PATTR_SEV_ES:
+   case PATTR_GUEST_PROT_STATE:
+   return sev_es_active();
+
+   default:
+   return false;
+   }
+}
+EXPORT_SYMBOL_GPL(amd_prot_guest_has);
+
 /* Override for DMA direct allocation check - ARCH_HAS_FORCE_DMA_UNENCRYPTED */
 bool force_dma_unencrypted(struct device *dev)
 {
diff --git a/include/linux/protected_guest.h b/include/linux/protected_guest.h
index 43d4dde94793..5ddef1b6a2ea 100644
--- a/include/linux/protected_guest.h
+++ b/include/linux/protected_guest.h
@@ -20,6 +20,11 @@
 #define PATTR_GUEST_MEM_ENCRYPT 2   /* Guest encrypted memory */
 #define PATTR_GUEST_PROT_STATE 3   /* Guest encrypted state */
 
+/* 0x800 - 0x8ff reserved for AMD */
+#define PATTR_SME  0x800
+#define PATTR_SEV  0x801
+#define PATTR_SEV_ES   0x802
+
 #ifdef CONFIG_ARCH_HAS_PROTECTED_GUEST
 
 #include 
-- 
2.32.0



[PATCH v2 02/12] mm: Introduce a function to check for virtualization protection features

2021-08-13 Thread Tom Lendacky
In prep for other protected virtualization technologies, introduce a
generic helper function, prot_guest_has(), that can be used to check
for specific protection attributes, like memory encryption. This is
intended to eliminate having to add multiple technology-specific checks
to the code (e.g. if (sev_active() || tdx_active())).
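
A minimal before/after sketch of one call site (use_bounce_buffers() and
tdx_active() are placeholders for illustration):

/* before: one vendor-specific check per technology */
if (sev_active() || tdx_active())
        use_bounce_buffers();

/* after: a single generic attribute check */
if (prot_guest_has(PATTR_GUEST_MEM_ENCRYPT))
        use_bounce_buffers();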

Reviewed-by: Joerg Roedel 
Co-developed-by: Andi Kleen 
Signed-off-by: Andi Kleen 
Co-developed-by: Kuppuswamy Sathyanarayanan 

Signed-off-by: Kuppuswamy Sathyanarayanan 

Signed-off-by: Tom Lendacky 
---
 arch/Kconfig|  3 +++
 include/linux/protected_guest.h | 35 +
 2 files changed, 38 insertions(+)
 create mode 100644 include/linux/protected_guest.h

diff --git a/arch/Kconfig b/arch/Kconfig
index 98db63496bab..bd4f60c581f1 100644
--- a/arch/Kconfig
+++ b/arch/Kconfig
@@ -1231,6 +1231,9 @@ config RELR
 config ARCH_HAS_MEM_ENCRYPT
bool
 
+config ARCH_HAS_PROTECTED_GUEST
+   bool
+
 config HAVE_SPARSE_SYSCALL_NR
bool
help
diff --git a/include/linux/protected_guest.h b/include/linux/protected_guest.h
new file mode 100644
index ..43d4dde94793
--- /dev/null
+++ b/include/linux/protected_guest.h
@@ -0,0 +1,35 @@
+/* SPDX-License-Identifier: GPL-2.0-only */
+/*
+ * Protected Guest (and Host) Capability checks
+ *
+ * Copyright (C) 2021 Advanced Micro Devices, Inc.
+ *
+ * Author: Tom Lendacky 
+ */
+
+#ifndef _PROTECTED_GUEST_H
+#define _PROTECTED_GUEST_H
+
+#ifndef __ASSEMBLY__
+
+#include 
+#include 
+
+#define PATTR_MEM_ENCRYPT  0   /* Encrypted memory */
+#define PATTR_HOST_MEM_ENCRYPT 1   /* Host encrypted memory */
+#define PATTR_GUEST_MEM_ENCRYPT 2   /* Guest encrypted memory */
+#define PATTR_GUEST_PROT_STATE 3   /* Guest encrypted state */
+
+#ifdef CONFIG_ARCH_HAS_PROTECTED_GUEST
+
+#include 
+
+#else  /* !CONFIG_ARCH_HAS_PROTECTED_GUEST */
+
+static inline bool prot_guest_has(unsigned int attr) { return false; }
+
+#endif /* CONFIG_ARCH_HAS_PROTECTED_GUEST */
+
+#endif /* __ASSEMBLY__ */
+
+#endif /* _PROTECTED_GUEST_H */
-- 
2.32.0



[PATCH v2 01/12] x86/ioremap: Selectively build arch override encryption functions

2021-08-13 Thread Tom Lendacky
In prep for other uses of the prot_guest_has() function besides AMD's
memory encryption support, selectively build the AMD memory encryption
architecture override functions only when CONFIG_AMD_MEM_ENCRYPT=y. These
functions are:
- early_memremap_pgprot_adjust()
- arch_memremap_can_ram_remap()

Additionally, routines that are only invoked by these architecture
override functions can also be conditionally built. These functions are:
- memremap_should_map_decrypted()
- memremap_is_efi_data()
- memremap_is_setup_data()
- early_memremap_is_setup_data()

And finally, phys_mem_access_encrypted() is conditionally built as well,
but requires a static inline version of it when CONFIG_AMD_MEM_ENCRYPT is
not set.

Cc: Thomas Gleixner 
Cc: Ingo Molnar 
Cc: Borislav Petkov 
Cc: Dave Hansen 
Cc: Andy Lutomirski 
Cc: Peter Zijlstra 
Signed-off-by: Tom Lendacky 
---
 arch/x86/include/asm/io.h | 8 
 arch/x86/mm/ioremap.c | 2 +-
 2 files changed, 9 insertions(+), 1 deletion(-)

diff --git a/arch/x86/include/asm/io.h b/arch/x86/include/asm/io.h
index 841a5d104afa..5c6a4af0b911 100644
--- a/arch/x86/include/asm/io.h
+++ b/arch/x86/include/asm/io.h
@@ -391,6 +391,7 @@ extern void arch_io_free_memtype_wc(resource_size_t start, 
resource_size_t size)
 #define arch_io_reserve_memtype_wc arch_io_reserve_memtype_wc
 #endif
 
+#ifdef CONFIG_AMD_MEM_ENCRYPT
 extern bool arch_memremap_can_ram_remap(resource_size_t offset,
unsigned long size,
unsigned long flags);
@@ -398,6 +399,13 @@ extern bool arch_memremap_can_ram_remap(resource_size_t 
offset,
 
 extern bool phys_mem_access_encrypted(unsigned long phys_addr,
  unsigned long size);
+#else
+static inline bool phys_mem_access_encrypted(unsigned long phys_addr,
+unsigned long size)
+{
+   return true;
+}
+#endif
 
 /**
  * iosubmit_cmds512 - copy data to single MMIO location, in 512-bit units
diff --git a/arch/x86/mm/ioremap.c b/arch/x86/mm/ioremap.c
index 60ade7dd71bd..ccff76cedd8f 100644
--- a/arch/x86/mm/ioremap.c
+++ b/arch/x86/mm/ioremap.c
@@ -508,6 +508,7 @@ void unxlate_dev_mem_ptr(phys_addr_t phys, void *addr)
memunmap((void *)((unsigned long)addr & PAGE_MASK));
 }
 
+#ifdef CONFIG_AMD_MEM_ENCRYPT
 /*
  * Examine the physical address to determine if it is an area of memory
  * that should be mapped decrypted.  If the memory is not part of the
@@ -746,7 +747,6 @@ bool phys_mem_access_encrypted(unsigned long phys_addr, 
unsigned long size)
return arch_memremap_can_ram_remap(phys_addr, size, 0);
 }
 
-#ifdef CONFIG_AMD_MEM_ENCRYPT
 /* Remap memory with encryption */
 void __init *early_memremap_encrypted(resource_size_t phys_addr,
  unsigned long size)
-- 
2.32.0



[PATCH v2 00/12] Implement generic prot_guest_has() helper function

2021-08-13 Thread Tom Lendacky
This patch series provides a generic helper function, prot_guest_has(),
to replace the sme_active(), sev_active(), sev_es_active() and
mem_encrypt_active() functions.

It is expected that as new protected virtualization technologies are
added to the kernel, they can all be covered by a single function call
instead of a collection of specific function calls all called from the
same locations.

The powerpc and s390 patches have been compile-tested only. Can the
folks copied on this series verify that nothing breaks for them?

Cc: Andi Kleen 
Cc: Andy Lutomirski 
Cc: Ard Biesheuvel 
Cc: Baoquan He 
Cc: Benjamin Herrenschmidt 
Cc: Borislav Petkov 
Cc: Christian Borntraeger 
Cc: Daniel Vetter 
Cc: Dave Hansen 
Cc: Dave Young 
Cc: David Airlie 
Cc: Heiko Carstens 
Cc: Ingo Molnar 
Cc: Joerg Roedel 
Cc: Maarten Lankhorst 
Cc: Maxime Ripard 
Cc: Michael Ellerman 
Cc: Paul Mackerras 
Cc: Peter Zijlstra 
Cc: Thomas Gleixner 
Cc: Thomas Zimmermann 
Cc: Vasily Gorbik 
Cc: VMware Graphics 
Cc: Will Deacon 

---

Patches based on:
  https://git.kernel.org/pub/scm/linux/kernel/git/tip/tip.git master
  0b52902cd2d9 ("Merge branch 'efi/urgent'")

Changes since v1:
- Move some arch ioremap functions within #ifdef CONFIG_AMD_MEM_ENCRYPT
  in prep for use of prot_guest_has() by TDX.
- Add type includes to the protected_guest.h header file to prevent
  build errors outside of x86.
- Make amd_prot_guest_has() EXPORT_SYMBOL_GPL
- Use amd_prot_guest_has() in place of checking sme_me_mask in the
  arch/x86/mm/mem_encrypt.c file.

Tom Lendacky (12):
  x86/ioremap: Selectively build arch override encryption functions
  mm: Introduce a function to check for virtualization protection
features
  x86/sev: Add an x86 version of prot_guest_has()
  powerpc/pseries/svm: Add a powerpc version of prot_guest_has()
  x86/sme: Replace occurrences of sme_active() with prot_guest_has()
  x86/sev: Replace occurrences of sev_active() with prot_guest_has()
  x86/sev: Replace occurrences of sev_es_active() with prot_guest_has()
  treewide: Replace the use of mem_encrypt_active() with
prot_guest_has()
  mm: Remove the now unused mem_encrypt_active() function
  x86/sev: Remove the now unused mem_encrypt_active() function
  powerpc/pseries/svm: Remove the now unused mem_encrypt_active()
function
  s390/mm: Remove the now unused mem_encrypt_active() function

 arch/Kconfig   |  3 ++
 arch/powerpc/include/asm/mem_encrypt.h |  5 --
 arch/powerpc/include/asm/protected_guest.h | 30 +++
 arch/powerpc/platforms/pseries/Kconfig |  1 +
 arch/s390/include/asm/mem_encrypt.h|  2 -
 arch/x86/Kconfig   |  1 +
 arch/x86/include/asm/io.h  |  8 +++
 arch/x86/include/asm/kexec.h   |  2 +-
 arch/x86/include/asm/mem_encrypt.h | 13 +
 arch/x86/include/asm/protected_guest.h | 29 +++
 arch/x86/kernel/crash_dump_64.c|  4 +-
 arch/x86/kernel/head64.c   |  4 +-
 arch/x86/kernel/kvm.c  |  3 +-
 arch/x86/kernel/kvmclock.c |  4 +-
 arch/x86/kernel/machine_kexec_64.c | 19 +++
 arch/x86/kernel/pci-swiotlb.c  |  9 ++--
 arch/x86/kernel/relocate_kernel_64.S   |  2 +-
 arch/x86/kernel/sev.c  |  6 +--
 arch/x86/kvm/svm/svm.c |  3 +-
 arch/x86/mm/ioremap.c  | 18 +++
 arch/x86/mm/mem_encrypt.c  | 60 +++---
 arch/x86/mm/mem_encrypt_identity.c |  3 +-
 arch/x86/mm/pat/set_memory.c   |  3 +-
 arch/x86/platform/efi/efi_64.c |  9 ++--
 arch/x86/realmode/init.c   |  8 +--
 drivers/gpu/drm/amd/amdgpu/amdgpu_drv.c|  4 +-
 drivers/gpu/drm/drm_cache.c|  4 +-
 drivers/gpu/drm/vmwgfx/vmwgfx_drv.c|  4 +-
 drivers/gpu/drm/vmwgfx/vmwgfx_msg.c|  6 +--
 drivers/iommu/amd/init.c   |  7 +--
 drivers/iommu/amd/iommu.c  |  3 +-
 drivers/iommu/amd/iommu_v2.c   |  3 +-
 drivers/iommu/iommu.c  |  3 +-
 fs/proc/vmcore.c   |  6 +--
 include/linux/mem_encrypt.h|  4 --
 include/linux/protected_guest.h| 40 +++
 kernel/dma/swiotlb.c   |  4 +-
 37 files changed, 232 insertions(+), 105 deletions(-)
 create mode 100644 arch/powerpc/include/asm/protected_guest.h
 create mode 100644 arch/x86/include/asm/protected_guest.h
 create mode 100644 include/linux/protected_guest.h

-- 
2.32.0



Re: [PATCH 07/11] treewide: Replace the use of mem_encrypt_active() with prot_guest_has()

2021-08-11 Thread Tom Lendacky
On 8/11/21 7:19 AM, Kirill A. Shutemov wrote:
> On Tue, Aug 10, 2021 at 02:48:54PM -0500, Tom Lendacky wrote:
>> On 8/10/21 1:45 PM, Kuppuswamy, Sathyanarayanan wrote:
>>>
>>>
>>> On 7/27/21 3:26 PM, Tom Lendacky wrote:
>>>> diff --git a/arch/x86/kernel/head64.c b/arch/x86/kernel/head64.c
>>>> index de01903c3735..cafed6456d45 100644
>>>> --- a/arch/x86/kernel/head64.c
>>>> +++ b/arch/x86/kernel/head64.c
>>>> @@ -19,7 +19,7 @@
>>>>   #include 
>>>>   #include 
>>>>   #include 
>>>> -#include 
>>>> +#include 
>>>>   #include 
>>>>     #include 
>>>> @@ -285,7 +285,7 @@ unsigned long __head __startup_64(unsigned long
>>>> physaddr,
>>>>    * there is no need to zero it after changing the memory encryption
>>>>    * attribute.
>>>>    */
>>>> -    if (mem_encrypt_active()) {
>>>> +    if (prot_guest_has(PATTR_MEM_ENCRYPT)) {
>>>>   vaddr = (unsigned long)__start_bss_decrypted;
>>>>   vaddr_end = (unsigned long)__end_bss_decrypted;
>>>
>>>
>>> Since this change is specific to AMD, can you replace PATTR_MEM_ENCRYPT with
>>> prot_guest_has(PATTR_SME) || prot_guest_has(PATTR_SEV)? It is not used in
>>> TDX.
>>
>> This is a direct replacement for now.
> 
> With current implementation of prot_guest_has() for TDX it breaks boot for
> me.
> 
> Looking at the code again, now I *think* the reason is accessing a global
> variable from __startup_64() inside TDX version of prot_guest_has().
> 
> __startup_64() is special. If you access any global variable you need to
> use fixup_pointer(). See comment before __startup_64().
> 
> I'm not sure how you get away with accessing sme_me_mask directly from
> there. Any clues? Maybe it's just luck and the compiler generates code just
> right for your case, I don't know.

Hmm... yeah, could be that the compiler is using rip-relative addressing
for it because it lives in the .data section?

For the static variables in mem_encrypt_identity.c I did an assembler
RIP-relative LEA, but I probably could have passed physaddr to sme_enable()
and used a fixup_pointer()-style function instead.
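
Roughly, the fixup_pointer() approach would look like this (sketch only,
untested; fixup_pointer() already exists in head64.c):

	/* Translate a kernel-virtual pointer to its current physical location. */
	static void *fixup_pointer(void *ptr, unsigned long physaddr)
	{
		return ptr - (void *)_text + (void *)physaddr;
	}

	/* Inside __startup_64(), a global would then be read via: */
	u64 mask = *(u64 *)fixup_pointer(&sme_me_mask, physaddr);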

> 
> A separate point is that TDX version of prot_guest_has() relies on
> cpu_feature_enabled() which is not ready at this point.

Does TDX have to do anything special to make memory able to be shared with
the hypervisor?  You might have to use something that is available earlier
than cpu_feature_enabled() in that case (should you eventually support
kvmclock).

> 
> I think __bss_decrypted fixup has to be done if sme_me_mask is non-zero.
> Or just do it unconditionally because it's a NOP for sme_me_mask == 0.

For SNP, we'll have to additionally call the HV to update the RMP to make
the memory shared. But that could also be done unconditionally since the
early_snp_set_memory_shared() routine will check for SNP before doing
anything.

Thanks,
Tom

> 
>> I think the change you're requesting
>> should be done as part of the TDX support patches so it's clear why it is
>> being changed.
>>
>> But, wouldn't TDX still need to do something with this shared/unencrypted
>> area, though? Or since it is shared, there's actually nothing you need to
>> do (the bss decrypted section exists even if CONFIG_AMD_MEM_ENCRYPT is not
>> configured)?
> 
> AFAICS, only kvmclock uses __bss_decrypted. We don't enable kvmclock in
> TDX at the moment. It may change in the future.
> 


Re: [PATCH 01/11] mm: Introduce a function to check for virtualization protection features

2021-08-11 Thread Tom Lendacky
On 8/11/21 9:53 AM, Kuppuswamy, Sathyanarayanan wrote:
> On 7/27/21 3:26 PM, Tom Lendacky wrote:
>> diff --git a/include/linux/protected_guest.h
>> b/include/linux/protected_guest.h
>> new file mode 100644
>> index ..f8ed7b72967b
>> --- /dev/null
>> +++ b/include/linux/protected_guest.h
>> @@ -0,0 +1,32 @@
>> +/* SPDX-License-Identifier: GPL-2.0-only */
>> +/*
>> + * Protected Guest (and Host) Capability checks
>> + *
>> + * Copyright (C) 2021 Advanced Micro Devices, Inc.
>> + *
>> + * Author: Tom Lendacky
>> + */
>> +
>> +#ifndef _PROTECTED_GUEST_H
>> +#define _PROTECTED_GUEST_H
>> +
>> +#ifndef __ASSEMBLY__
> 
> Can you include headers for bool type and false definition?

Can do.

Thanks,
Tom

> 
> --- a/include/linux/protected_guest.h
> +++ b/include/linux/protected_guest.h
> @@ -12,6 +12,9 @@
> 
>  #ifndef __ASSEMBLY__
> 
> +#include 
> +#include 
> 
> Otherwise, I see the following errors in multi-config auto testing.
> 
> include/linux/protected_guest.h:40:15: error: unknown type name 'bool'
> include/linux/protected_guest.h:40:63: error: 'false' undeclared (first
> use in this function)
> 


Re: [PATCH 07/11] treewide: Replace the use of mem_encrypt_active() with prot_guest_has()

2021-08-10 Thread Tom Lendacky
On 8/10/21 1:45 PM, Kuppuswamy, Sathyanarayanan wrote:
> 
> 
> On 7/27/21 3:26 PM, Tom Lendacky wrote:
>> diff --git a/arch/x86/kernel/head64.c b/arch/x86/kernel/head64.c
>> index de01903c3735..cafed6456d45 100644
>> --- a/arch/x86/kernel/head64.c
>> +++ b/arch/x86/kernel/head64.c
>> @@ -19,7 +19,7 @@
>>   #include 
>>   #include 
>>   #include 
>> -#include 
>> +#include 
>>   #include 
>>     #include 
>> @@ -285,7 +285,7 @@ unsigned long __head __startup_64(unsigned long
>> physaddr,
>>    * there is no need to zero it after changing the memory encryption
>>    * attribute.
>>    */
>> -    if (mem_encrypt_active()) {
>> +    if (prot_guest_has(PATTR_MEM_ENCRYPT)) {
>>   vaddr = (unsigned long)__start_bss_decrypted;
>>   vaddr_end = (unsigned long)__end_bss_decrypted;
> 
> 
> Since this change is specific to AMD, can you replace PATTR_MEM_ENCRYPT with
> prot_guest_has(PATTR_SME) || prot_guest_has(PATTR_SEV)? It is not used in
> TDX.

This is a direct replacement for now. I think the change you're requesting
should be done as part of the TDX support patches so it's clear why it is
being changed.

But, wouldn't TDX still need to do something with this shared/unencrypted
area, though? Or since it is shared, there's actually nothing you need to
do (the bss decrypted section exists even if CONFIG_AMD_MEM_ENCRYPT is not
configured)?

Thanks,
Tom

> 


Re: [PATCH 00/11] Implement generic prot_guest_has() helper function

2021-08-09 Thread Tom Lendacky
On 8/8/21 8:41 PM, Kuppuswamy, Sathyanarayanan wrote:
> Hi Tom,
> 
> On 7/27/21 3:26 PM, Tom Lendacky wrote:
>> This patch series provides a generic helper function, prot_guest_has(),
>> to replace the sme_active(), sev_active(), sev_es_active() and
>> mem_encrypt_active() functions.
>>
>> It is expected that as new protected virtualization technologies are
>> added to the kernel, they can all be covered by a single function call
>> instead of a collection of specific function calls all called from the
>> same locations.
>>
>> The powerpc and s390 patches have been compile-tested only. Can the
>> folks copied on this series verify that nothing breaks for them?
> 
> With this patch set, select ARCH_HAS_PROTECTED_GUEST and set
> CONFIG_AMD_MEM_ENCRYPT=n, creates following error.
> 
> ld: arch/x86/mm/ioremap.o: in function `early_memremap_is_setup_data':
> arch/x86/mm/ioremap.c:672: undefined reference to `early_memremap_decrypted'
> 
> It looks like early_memremap_is_setup_data() is not protected by the
> appropriate config option.

Ok, thanks for finding that. I'll fix that.

Thanks,
Tom

> 
> 
>>
>> Cc: Andi Kleen 
>> Cc: Andy Lutomirski 
>> Cc: Ard Biesheuvel 
>> Cc: Baoquan He 
>> Cc: Benjamin Herrenschmidt 
>> Cc: Borislav Petkov 
>> Cc: Christian Borntraeger 
>> Cc: Daniel Vetter 
>> Cc: Dave Hansen 
>> Cc: Dave Young 
>> Cc: David Airlie 
>> Cc: Heiko Carstens 
>> Cc: Ingo Molnar 
>> Cc: Joerg Roedel 
>> Cc: Maarten Lankhorst 
>> Cc: Maxime Ripard 
>> Cc: Michael Ellerman 
>> Cc: Paul Mackerras 
>> Cc: Peter Zijlstra 
>> Cc: Thomas Gleixner 
>> Cc: Thomas Zimmermann 
>> Cc: Vasily Gorbik 
>> Cc: VMware Graphics 
>> Cc: Will Deacon 
>>
>> ---
>>
>> Patches based on:
>>   https://git.kernel.org/pub/scm/linux/kernel/git/tip/tip.git master
>>   commit 79e920060fa7 ("Merge branch 'WIP/fixes'")
>>
>> Tom Lendacky (11):
>>    mm: Introduce a function to check for virtualization protection
>>  features
>>    x86/sev: Add an x86 version of prot_guest_has()
>>    powerpc/pseries/svm: Add a powerpc version of prot_guest_has()
>>    x86/sme: Replace occurrences of sme_active() with prot_guest_has()
>>    x86/sev: Replace occurrences of sev_active() with prot_guest_has()
>>    x86/sev: Replace occurrences of sev_es_active() with prot_guest_has()
>>    treewide: Replace the use of mem_encrypt_active() with
>>  prot_guest_has()
>>    mm: Remove the now unused mem_encrypt_active() function
>>    x86/sev: Remove the now unused mem_encrypt_active() function
>>    powerpc/pseries/svm: Remove the now unused mem_encrypt_active()
>>  function
>>    s390/mm: Remove the now unused mem_encrypt_active() function
>>
>>   arch/Kconfig   |  3 ++
>>   arch/powerpc/include/asm/mem_encrypt.h |  5 --
>>   arch/powerpc/include/asm/protected_guest.h | 30 +++
>>   arch/powerpc/platforms/pseries/Kconfig |  1 +
>>   arch/s390/include/asm/mem_encrypt.h    |  2 -
>>   arch/x86/Kconfig   |  1 +
>>   arch/x86/include/asm/kexec.h   |  2 +-
>>   arch/x86/include/asm/mem_encrypt.h | 13 +
>>   arch/x86/include/asm/protected_guest.h | 27 ++
>>   arch/x86/kernel/crash_dump_64.c    |  4 +-
>>   arch/x86/kernel/head64.c   |  4 +-
>>   arch/x86/kernel/kvm.c  |  3 +-
>>   arch/x86/kernel/kvmclock.c |  4 +-
>>   arch/x86/kernel/machine_kexec_64.c | 19 +++
>>   arch/x86/kernel/pci-swiotlb.c  |  9 ++--
>>   arch/x86/kernel/relocate_kernel_64.S   |  2 +-
>>   arch/x86/kernel/sev.c  |  6 +--
>>   arch/x86/kvm/svm/svm.c |  3 +-
>>   arch/x86/mm/ioremap.c  | 16 +++---
>>   arch/x86/mm/mem_encrypt.c  | 60 +++---
>>   arch/x86/mm/mem_encrypt_identity.c |  3 +-
>>   arch/x86/mm/pat/set_memory.c   |  3 +-
>>   arch/x86/platform/efi/efi_64.c |  9 ++--
>>   arch/x86/realmode/init.c   |  8 +

Re: [PATCH 07/11] treewide: Replace the use of mem_encrypt_active() with prot_guest_has()

2021-08-09 Thread Tom Lendacky
On 8/2/21 7:42 AM, Christophe Leroy wrote:
> 
> 
> Le 28/07/2021 à 00:26, Tom Lendacky a écrit :
>> Replace occurrences of mem_encrypt_active() with calls to prot_guest_has()
>> with the PATTR_MEM_ENCRYPT attribute.
> 
> 
> What about
> https://patchwork.ozlabs.org/project/linuxppc-dev/patch/20210730114231.23445-1-will@kernel.org/ ?

Ah, looks like that just went into the PPC tree and isn't part of the tip
tree. I'll have to look into how to handle that one.

Thanks,
Tom

> 
> Christophe
> 
> 
>>
>> Cc: Thomas Gleixner 
>> Cc: Ingo Molnar 
>> Cc: Borislav Petkov 
>> Cc: Dave Hansen 
>> Cc: Andy Lutomirski 
>> Cc: Peter Zijlstra 
>> Cc: David Airlie 
>> Cc: Daniel Vetter 
>> Cc: Maarten Lankhorst 
>> Cc: Maxime Ripard 
>> Cc: Thomas Zimmermann 
>> Cc: VMware Graphics 
>> Cc: Joerg Roedel 
>> Cc: Will Deacon 
>> Cc: Dave Young 
>> Cc: Baoquan He 
>> Signed-off-by: Tom Lendacky 
>> ---
>>   arch/x86/kernel/head64.c    | 4 ++--
>>   arch/x86/mm/ioremap.c   | 4 ++--
>>   arch/x86/mm/mem_encrypt.c   | 5 ++---
>>   arch/x86/mm/pat/set_memory.c    | 3 ++-
>>   drivers/gpu/drm/amd/amdgpu/amdgpu_drv.c | 4 +++-
>>   drivers/gpu/drm/drm_cache.c | 4 ++--
>>   drivers/gpu/drm/vmwgfx/vmwgfx_drv.c | 4 ++--
>>   drivers/gpu/drm/vmwgfx/vmwgfx_msg.c | 6 +++---
>>   drivers/iommu/amd/iommu.c   | 3 ++-
>>   drivers/iommu/amd/iommu_v2.c    | 3 ++-
>>   drivers/iommu/iommu.c   | 3 ++-
>>   fs/proc/vmcore.c    | 6 +++---
>>   kernel/dma/swiotlb.c    | 4 ++--
>>   13 files changed, 29 insertions(+), 24 deletions(-)
>>
>> diff --git a/arch/x86/kernel/head64.c b/arch/x86/kernel/head64.c
>> index de01903c3735..cafed6456d45 100644
>> --- a/arch/x86/kernel/head64.c
>> +++ b/arch/x86/kernel/head64.c
>> @@ -19,7 +19,7 @@
>>   #include 
>>   #include 
>>   #include 
>> -#include 
>> +#include 
>>   #include 
>>     #include 
>> @@ -285,7 +285,7 @@ unsigned long __head __startup_64(unsigned long
>> physaddr,
>>    * there is no need to zero it after changing the memory encryption
>>    * attribute.
>>    */
>> -    if (mem_encrypt_active()) {
>> +    if (prot_guest_has(PATTR_MEM_ENCRYPT)) {
>>   vaddr = (unsigned long)__start_bss_decrypted;
>>   vaddr_end = (unsigned long)__end_bss_decrypted;
>>   for (; vaddr < vaddr_end; vaddr += PMD_SIZE) {
>> diff --git a/arch/x86/mm/ioremap.c b/arch/x86/mm/ioremap.c
>> index 0f2d5ace5986..5e1c1f5cbbe8 100644
>> --- a/arch/x86/mm/ioremap.c
>> +++ b/arch/x86/mm/ioremap.c
>> @@ -693,7 +693,7 @@ static bool __init
>> early_memremap_is_setup_data(resource_size_t phys_addr,
>>   bool arch_memremap_can_ram_remap(resource_size_t phys_addr, unsigned
>> long size,
>>    unsigned long flags)
>>   {
>> -    if (!mem_encrypt_active())
>> +    if (!prot_guest_has(PATTR_MEM_ENCRYPT))
>>   return true;
>>     if (flags & MEMREMAP_ENC)
>> @@ -723,7 +723,7 @@ pgprot_t __init
>> early_memremap_pgprot_adjust(resource_size_t phys_addr,
>>   {
>>   bool encrypted_prot;
>>   -    if (!mem_encrypt_active())
>> +    if (!prot_guest_has(PATTR_MEM_ENCRYPT))
>>   return prot;
>>     encrypted_prot = true;
>> diff --git a/arch/x86/mm/mem_encrypt.c b/arch/x86/mm/mem_encrypt.c
>> index 451de8e84fce..0f1533dbe81c 100644
>> --- a/arch/x86/mm/mem_encrypt.c
>> +++ b/arch/x86/mm/mem_encrypt.c
>> @@ -364,8 +364,7 @@ int __init early_set_memory_encrypted(unsigned long
>> vaddr, unsigned long size)
>>   /*
>>    * SME and SEV are very similar but they are not the same, so there are
>>    * times that the kernel will need to distinguish between SME and SEV.
>> The
>> - * sme_active() and sev_active() functions are used for this.  When a
>> - * distinction isn't needed, the mem_encrypt_active() function can be
>> used.
>> + * sme_active() and sev_active() functions are used for this.
>>    *
>>    * The trampoline code is a good example for th

Re: [PATCH 06/11] x86/sev: Replace occurrences of sev_es_active() with prot_guest_has()

2021-08-09 Thread Tom Lendacky
On 8/2/21 5:45 AM, Joerg Roedel wrote:
> On Tue, Jul 27, 2021 at 05:26:09PM -0500, Tom Lendacky wrote:
>> @@ -48,7 +47,7 @@ static void sme_sev_setup_real_mode(struct 
>> trampoline_header *th)
>>  if (prot_guest_has(PATTR_HOST_MEM_ENCRYPT))
>>  th->flags |= TH_FLAGS_SME_ACTIVE;
>>  
>> -if (sev_es_active()) {
>> +if (prot_guest_has(PATTR_GUEST_PROT_STATE)) {
>>  /*
>>   * Skip the call to verify_cpu() in secondary_startup_64 as it
>>   * will cause #VC exceptions when the AP can't handle them yet.
> 
> Not sure how TDX will handle AP booting, are you sure it needs this
> special setup as well? Otherwise a check for SEV-ES would be better
> instead of the generic PATTR_GUEST_PROT_STATE.

Yes, I'm not sure either. I figure that change can be made, if needed, as
part of the TDX support.

Thanks,
Tom

> 
> Regards,
> 
> Joerg
> 


Re: [PATCH 07/11] treewide: Replace the use of mem_encrypt_active() with prot_guest_has()

2021-08-09 Thread Tom Lendacky
On 7/30/21 5:34 PM, Sean Christopherson wrote:
> On Tue, Jul 27, 2021, Tom Lendacky wrote:
>> @@ -451,7 +450,7 @@ void __init mem_encrypt_free_decrypted_mem(void)
>>   * The unused memory range was mapped decrypted, change the encryption
>>   * attribute from decrypted to encrypted before freeing it.
>>   */
>> -if (mem_encrypt_active()) {
>> +if (sme_me_mask) {
> 
> Any reason this uses sme_me_mask?  The helper it calls, 
> __set_memory_enc_dec(),
> uses prot_guest_has(PATTR_MEM_ENCRYPT) so I assume it's available?

Probably just a slip on my part. I was debating at one point calling the
helper vs. referencing the variables/functions directly in the
mem_encrypt.c file.
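
In other words, the helper-based form that was likely intended here
(sketch only, mirroring the hunk quoted below):

	if (prot_guest_has(PATTR_MEM_ENCRYPT)) {
		r = set_memory_encrypted(vaddr, npages);
		if (r)
			pr_warn("failed to free unused decrypted pages\n");
	}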

Thanks,
Tom

> 
>>  r = set_memory_encrypted(vaddr, npages);
>>  if (r) {
>>  pr_warn("failed to free unused decrypted pages\n");
> 


Re: [PATCH 02/11] x86/sev: Add an x86 version of prot_guest_has()

2021-07-29 Thread Tom Lendacky
On 7/28/21 8:22 AM, Christoph Hellwig wrote:
> On Tue, Jul 27, 2021 at 05:26:05PM -0500, Tom Lendacky via iommu wrote:
>> Introduce an x86 version of the prot_guest_has() function. This will be
>> used in the more generic x86 code to replace vendor specific calls like
>> sev_active(), etc.
>>
>> While the name suggests this is intended mainly for guests, it will
>> also be used for host memory encryption checks in place of sme_active().
>>
>> The amd_prot_guest_has() function does not use EXPORT_SYMBOL_GPL for the
>> same reasons previously stated when changing sme_active(), sev_active and
> 
> None of that applies here as none of the callers get pulled into
> random macros.  The only case of that is sme_me_mask through
> sme_mask, but that's not something this series replaces as far as I can
> tell.

Ok, let me make sure of that and I'll change to EXPORT_SYMBOL_GPL if
that's the case.
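
(For the record, the v2 changelog confirms this: the export became

	EXPORT_SYMBOL_GPL(amd_prot_guest_has);

in arch/x86/mm/mem_encrypt.c.)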

Thanks,
Tom

> 


Re: [PATCH 00/11] Implement generic prot_guest_has() helper function

2021-07-27 Thread Tom Lendacky
On 7/27/21 5:26 PM, Tom Lendacky wrote:
> This patch series provides a generic helper function, prot_guest_has(),
> to replace the sme_active(), sev_active(), sev_es_active() and
> mem_encrypt_active() functions.
> 
> It is expected that as new protected virtualization technologies are
> added to the kernel, they can all be covered by a single function call
> instead of a collection of specific function calls all called from the
> same locations.
> 
> The powerpc and s390 patches have been compile-tested only. Can the
> folks copied on this series verify that nothing breaks for them?

I wanted to get this out before I head out on vacation at the end of the
week. I'll only be out for a week, but I won't be able to respond to any
feedback until I get back.

I'm still not a fan of the name prot_guest_has() because it is used for
some bare-metal checks, but I really haven't been able to come up with
anything better. So take it with a grain of salt where the sme_active()
calls are replaced by prot_guest_has().

Also, let me know if the treewide changes in patch #7 need to be further
split out by tree.

Thanks,
Tom

> 
> Cc: Andi Kleen 
> Cc: Andy Lutomirski 
> Cc: Ard Biesheuvel 
> Cc: Baoquan He 
> Cc: Benjamin Herrenschmidt 
> Cc: Borislav Petkov 
> Cc: Christian Borntraeger 
> Cc: Daniel Vetter 
> Cc: Dave Hansen 
> Cc: Dave Young 
> Cc: David Airlie 
> Cc: Heiko Carstens 
> Cc: Ingo Molnar 
> Cc: Joerg Roedel 
> Cc: Maarten Lankhorst 
> Cc: Maxime Ripard 
> Cc: Michael Ellerman 
> Cc: Paul Mackerras 
> Cc: Peter Zijlstra 
> Cc: Thomas Gleixner 
> Cc: Thomas Zimmermann 
> Cc: Vasily Gorbik 
> Cc: VMware Graphics 
> Cc: Will Deacon 
> 
> ---
> 
> Patches based on:
>   https://git.kernel.org/pub/scm/linux/kernel/git/tip/tip.git master
>   commit 79e920060fa7 ("Merge branch 'WIP/fixes'")
> 
> Tom Lendacky (11):
>   mm: Introduce a function to check for virtualization protection
> features
>   x86/sev: Add an x86 version of prot_guest_has()
>   powerpc/pseries/svm: Add a powerpc version of prot_guest_has()
>   x86/sme: Replace occurrences of sme_active() with prot_guest_has()
>   x86/sev: Replace occurrences of sev_active() with prot_guest_has()
>   x86/sev: Replace occurrences of sev_es_active() with prot_guest_has()
>   treewide: Replace the use of mem_encrypt_active() with
> prot_guest_has()
>   mm: Remove the now unused mem_encrypt_active() function
>   x86/sev: Remove the now unused mem_encrypt_active() function
>   powerpc/pseries/svm: Remove the now unused mem_encrypt_active()
> function
>   s390/mm: Remove the now unused mem_encrypt_active() function
> 
>  arch/Kconfig   |  3 ++
>  arch/powerpc/include/asm/mem_encrypt.h |  5 --
>  arch/powerpc/include/asm/protected_guest.h | 30 +++
>  arch/powerpc/platforms/pseries/Kconfig |  1 +
>  arch/s390/include/asm/mem_encrypt.h|  2 -
>  arch/x86/Kconfig   |  1 +
>  arch/x86/include/asm/kexec.h   |  2 +-
>  arch/x86/include/asm/mem_encrypt.h | 13 +
>  arch/x86/include/asm/protected_guest.h | 27 ++
>  arch/x86/kernel/crash_dump_64.c|  4 +-
>  arch/x86/kernel/head64.c   |  4 +-
>  arch/x86/kernel/kvm.c  |  3 +-
>  arch/x86/kernel/kvmclock.c |  4 +-
>  arch/x86/kernel/machine_kexec_64.c | 19 +++
>  arch/x86/kernel/pci-swiotlb.c  |  9 ++--
>  arch/x86/kernel/relocate_kernel_64.S   |  2 +-
>  arch/x86/kernel/sev.c  |  6 +--
>  arch/x86/kvm/svm/svm.c |  3 +-
>  arch/x86/mm/ioremap.c  | 16 +++---
>  arch/x86/mm/mem_encrypt.c  | 60 +++---
>  arch/x86/mm/mem_encrypt_identity.c |  3 +-
>  arch/x86/mm/pat/set_memory.c   |  3 +-
>  arch/x86/platform/efi/efi_64.c |  9 ++--
>  arch/x86/realmode/init.c   |  8 +--
>  drivers/gpu/drm/amd/amdgpu/amdgpu_drv.c|  4 +-
>  drivers/gpu/drm/drm_cache.c|  4 +-
>  drivers/gpu/drm/vmwgfx/vmwgfx_drv.c|  4 +-
>  drivers/gpu/drm/vmwgfx/vmwgfx_msg.c|  6 +--
>  drivers/iommu/amd/init.c   |  7 +--
>  drivers/iommu/amd/iommu.c  |  3 +-
>  drivers/iommu/amd/iommu_v2.c   |  3 +-
>  drivers/iommu/iommu.c  |  3 +-
>  fs/proc/vmcore.c   |  6 +--
>  include/linux/mem_encrypt.h|  4 --
>  include/linux/protected_guest.h| 37 +
>  kernel/dma/swiotlb.c   |  4 +-
>  36 files changed, 218 insertions(+), 104 deletions(-)
>  create mode 100644 arch/powerpc/include/asm/protected_guest.h
>  create mode 100644 arch/x86/include/asm/protected_guest.h
>  create mode 100644 include/linux/protected_guest.h
> 


[PATCH 11/11] s390/mm: Remove the now unused mem_encrypt_active() function

2021-07-27 Thread Tom Lendacky
The mem_encrypt_active() function has been replaced by prot_guest_has(),
so remove the implementation. Since the default implementation of
prot_guest_has() matches the s390 implementation of mem_encrypt_active(),
prot_guest_has() does not need to be implemented in s390 (the config
option ARCH_HAS_PROTECTED_GUEST is not set).

Cc: Heiko Carstens 
Cc: Vasily Gorbik 
Cc: Christian Borntraeger 
Signed-off-by: Tom Lendacky 
---
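For reference (not part of the diff): the generic fallback in
include/linux/protected_guest.h already provides exactly the old s390
behavior:

	static inline bool prot_guest_has(unsigned int attr) { return false; }
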
 arch/s390/include/asm/mem_encrypt.h | 2 --
 1 file changed, 2 deletions(-)

diff --git a/arch/s390/include/asm/mem_encrypt.h 
b/arch/s390/include/asm/mem_encrypt.h
index 2542cbf7e2d1..08a8b96606d7 100644
--- a/arch/s390/include/asm/mem_encrypt.h
+++ b/arch/s390/include/asm/mem_encrypt.h
@@ -4,8 +4,6 @@
 
 #ifndef __ASSEMBLY__
 
-static inline bool mem_encrypt_active(void) { return false; }
-
 int set_memory_encrypted(unsigned long addr, int numpages);
 int set_memory_decrypted(unsigned long addr, int numpages);
 
-- 
2.32.0



[PATCH 10/11] powerpc/pseries/svm: Remove the now unused mem_encrypt_active() function

2021-07-27 Thread Tom Lendacky
The mem_encrypt_active() function has been replaced by prot_guest_has(),
so remove the implementation.

Cc: Michael Ellerman 
Cc: Benjamin Herrenschmidt 
Cc: Paul Mackerras 
Signed-off-by: Tom Lendacky 
---
 arch/powerpc/include/asm/mem_encrypt.h | 5 -
 1 file changed, 5 deletions(-)

diff --git a/arch/powerpc/include/asm/mem_encrypt.h 
b/arch/powerpc/include/asm/mem_encrypt.h
index ba9dab07c1be..2f26b8fc8d29 100644
--- a/arch/powerpc/include/asm/mem_encrypt.h
+++ b/arch/powerpc/include/asm/mem_encrypt.h
@@ -10,11 +10,6 @@
 
 #include 
 
-static inline bool mem_encrypt_active(void)
-{
-   return is_secure_guest();
-}
-
 static inline bool force_dma_unencrypted(struct device *dev)
 {
return is_secure_guest();
-- 
2.32.0



[PATCH 09/11] x86/sev: Remove the now unused mem_encrypt_active() function

2021-07-27 Thread Tom Lendacky
The mem_encrypt_active() function has been replaced by prot_guest_has(),
so remove the implementation.

Cc: Thomas Gleixner 
Cc: Ingo Molnar 
Cc: Borislav Petkov 
Signed-off-by: Tom Lendacky 
---
 arch/x86/include/asm/mem_encrypt.h | 5 -
 1 file changed, 5 deletions(-)

diff --git a/arch/x86/include/asm/mem_encrypt.h 
b/arch/x86/include/asm/mem_encrypt.h
index 797146e0cd6b..94c089e9ea69 100644
--- a/arch/x86/include/asm/mem_encrypt.h
+++ b/arch/x86/include/asm/mem_encrypt.h
@@ -97,11 +97,6 @@ static inline void mem_encrypt_free_decrypted_mem(void) { }
 
 extern char __start_bss_decrypted[], __end_bss_decrypted[], 
__start_bss_decrypted_unused[];
 
-static inline bool mem_encrypt_active(void)
-{
-   return sme_me_mask;
-}
-
 static inline u64 sme_get_me_mask(void)
 {
return sme_me_mask;
-- 
2.32.0



[PATCH 08/11] mm: Remove the now unused mem_encrypt_active() function

2021-07-27 Thread Tom Lendacky
The mem_encrypt_active() function has been replaced by prot_guest_has(),
so remove the implementation.

Signed-off-by: Tom Lendacky 
---
 include/linux/mem_encrypt.h | 4 
 1 file changed, 4 deletions(-)

diff --git a/include/linux/mem_encrypt.h b/include/linux/mem_encrypt.h
index 5c4a18a91f89..ae4526389261 100644
--- a/include/linux/mem_encrypt.h
+++ b/include/linux/mem_encrypt.h
@@ -16,10 +16,6 @@
 
 #include 
 
-#else  /* !CONFIG_ARCH_HAS_MEM_ENCRYPT */
-
-static inline bool mem_encrypt_active(void) { return false; }
-
 #endif /* CONFIG_ARCH_HAS_MEM_ENCRYPT */
 
 #ifdef CONFIG_AMD_MEM_ENCRYPT
-- 
2.32.0



[PATCH 07/11] treewide: Replace the use of mem_encrypt_active() with prot_guest_has()

2021-07-27 Thread Tom Lendacky
Replace occurrences of mem_encrypt_active() with calls to prot_guest_has()
with the PATTR_MEM_ENCRYPT attribute.

Cc: Thomas Gleixner 
Cc: Ingo Molnar 
Cc: Borislav Petkov 
Cc: Dave Hansen 
Cc: Andy Lutomirski 
Cc: Peter Zijlstra 
Cc: David Airlie 
Cc: Daniel Vetter 
Cc: Maarten Lankhorst 
Cc: Maxime Ripard 
Cc: Thomas Zimmermann 
Cc: VMware Graphics 
Cc: Joerg Roedel 
Cc: Will Deacon 
Cc: Dave Young 
Cc: Baoquan He 
Signed-off-by: Tom Lendacky 
---
 arch/x86/kernel/head64.c| 4 ++--
 arch/x86/mm/ioremap.c   | 4 ++--
 arch/x86/mm/mem_encrypt.c   | 5 ++---
 arch/x86/mm/pat/set_memory.c| 3 ++-
 drivers/gpu/drm/amd/amdgpu/amdgpu_drv.c | 4 +++-
 drivers/gpu/drm/drm_cache.c | 4 ++--
 drivers/gpu/drm/vmwgfx/vmwgfx_drv.c | 4 ++--
 drivers/gpu/drm/vmwgfx/vmwgfx_msg.c | 6 +++---
 drivers/iommu/amd/iommu.c   | 3 ++-
 drivers/iommu/amd/iommu_v2.c| 3 ++-
 drivers/iommu/iommu.c   | 3 ++-
 fs/proc/vmcore.c| 6 +++---
 kernel/dma/swiotlb.c| 4 ++--
 13 files changed, 29 insertions(+), 24 deletions(-)

diff --git a/arch/x86/kernel/head64.c b/arch/x86/kernel/head64.c
index de01903c3735..cafed6456d45 100644
--- a/arch/x86/kernel/head64.c
+++ b/arch/x86/kernel/head64.c
@@ -19,7 +19,7 @@
 #include 
 #include 
 #include 
-#include 
+#include 
 #include 
 
 #include 
@@ -285,7 +285,7 @@ unsigned long __head __startup_64(unsigned long physaddr,
 * there is no need to zero it after changing the memory encryption
 * attribute.
 */
-   if (mem_encrypt_active()) {
+   if (prot_guest_has(PATTR_MEM_ENCRYPT)) {
vaddr = (unsigned long)__start_bss_decrypted;
vaddr_end = (unsigned long)__end_bss_decrypted;
for (; vaddr < vaddr_end; vaddr += PMD_SIZE) {
diff --git a/arch/x86/mm/ioremap.c b/arch/x86/mm/ioremap.c
index 0f2d5ace5986..5e1c1f5cbbe8 100644
--- a/arch/x86/mm/ioremap.c
+++ b/arch/x86/mm/ioremap.c
@@ -693,7 +693,7 @@ static bool __init 
early_memremap_is_setup_data(resource_size_t phys_addr,
 bool arch_memremap_can_ram_remap(resource_size_t phys_addr, unsigned long size,
 unsigned long flags)
 {
-   if (!mem_encrypt_active())
+   if (!prot_guest_has(PATTR_MEM_ENCRYPT))
return true;
 
if (flags & MEMREMAP_ENC)
@@ -723,7 +723,7 @@ pgprot_t __init 
early_memremap_pgprot_adjust(resource_size_t phys_addr,
 {
bool encrypted_prot;
 
-   if (!mem_encrypt_active())
+   if (!prot_guest_has(PATTR_MEM_ENCRYPT))
return prot;
 
encrypted_prot = true;
diff --git a/arch/x86/mm/mem_encrypt.c b/arch/x86/mm/mem_encrypt.c
index 451de8e84fce..0f1533dbe81c 100644
--- a/arch/x86/mm/mem_encrypt.c
+++ b/arch/x86/mm/mem_encrypt.c
@@ -364,8 +364,7 @@ int __init early_set_memory_encrypted(unsigned long vaddr, 
unsigned long size)
 /*
  * SME and SEV are very similar but they are not the same, so there are
  * times that the kernel will need to distinguish between SME and SEV. The
- * sme_active() and sev_active() functions are used for this.  When a
- * distinction isn't needed, the mem_encrypt_active() function can be used.
+ * sme_active() and sev_active() functions are used for this.
  *
  * The trampoline code is a good example for this requirement.  Before
  * paging is activated, SME will access all memory as decrypted, but SEV
@@ -451,7 +450,7 @@ void __init mem_encrypt_free_decrypted_mem(void)
 * The unused memory range was mapped decrypted, change the encryption
 * attribute from decrypted to encrypted before freeing it.
 */
-   if (mem_encrypt_active()) {
+   if (sme_me_mask) {
r = set_memory_encrypted(vaddr, npages);
if (r) {
pr_warn("failed to free unused decrypted pages\n");
diff --git a/arch/x86/mm/pat/set_memory.c b/arch/x86/mm/pat/set_memory.c
index ad8a5c586a35..6925f2bb4be1 100644
--- a/arch/x86/mm/pat/set_memory.c
+++ b/arch/x86/mm/pat/set_memory.c
@@ -18,6 +18,7 @@
 #include 
 #include 
 #include 
+#include 
 
 #include 
 #include 
@@ -1986,7 +1987,7 @@ static int __set_memory_enc_dec(unsigned long addr, int 
numpages, bool enc)
int ret;
 
/* Nothing to do if memory encryption is not active */
-   if (!mem_encrypt_active())
+   if (!prot_guest_has(PATTR_MEM_ENCRYPT))
return 0;
 
/* Should not be working on unaligned addresses */
diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_drv.c 
b/drivers/gpu/drm/amd/amdgpu/amdgpu_drv.c
index abb928894eac..8407224717df 100644
--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_drv.c
+++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_drv.c
@@ -38,6 +38,7 @@
 #include 
 #include 
 #include 
+#include 
 
 #include "amdgpu.h"
 #include "amdgpu_irq.h"
@@ -1239,7 +1240,8 @@ static int amdgpu_pci_probe(struct pci_dev *p

[PATCH 06/11] x86/sev: Replace occurrences of sev_es_active() with prot_guest_has()

2021-07-27 Thread Tom Lendacky
Replace occurrences of sev_es_active() with the more generic
prot_guest_has() using PATTR_GUEST_PROT_STATE, except for in
arch/x86/kernel/sev*.c and arch/x86/mm/mem_encrypt*.c where PATTR_SEV_ES
will be used. If future support is added for other memory encryption
technologies, the use of PATTR_GUEST_PROT_STATE can be updated, as
required, to specifically use PATTR_SEV_ES.

Cc: Thomas Gleixner 
Cc: Ingo Molnar 
Cc: Borislav Petkov 
Signed-off-by: Tom Lendacky 
---
 arch/x86/include/asm/mem_encrypt.h | 2 --
 arch/x86/kernel/sev.c  | 6 +++---
 arch/x86/mm/mem_encrypt.c  | 7 +++
 arch/x86/realmode/init.c   | 3 +--
 4 files changed, 7 insertions(+), 11 deletions(-)

diff --git a/arch/x86/include/asm/mem_encrypt.h 
b/arch/x86/include/asm/mem_encrypt.h
index 7e25de37c148..797146e0cd6b 100644
--- a/arch/x86/include/asm/mem_encrypt.h
+++ b/arch/x86/include/asm/mem_encrypt.h
@@ -50,7 +50,6 @@ void __init mem_encrypt_free_decrypted_mem(void);
 void __init mem_encrypt_init(void);
 
 void __init sev_es_init_vc_handling(void);
-bool sev_es_active(void);
 bool amd_prot_guest_has(unsigned int attr);
 
 #define __bss_decrypted __section(".bss..decrypted")
@@ -74,7 +73,6 @@ static inline void __init sme_encrypt_kernel(struct 
boot_params *bp) { }
 static inline void __init sme_enable(struct boot_params *bp) { }
 
 static inline void sev_es_init_vc_handling(void) { }
-static inline bool sev_es_active(void) { return false; }
 static inline bool amd_prot_guest_has(unsigned int attr) { return false; }
 
 static inline int __init
diff --git a/arch/x86/kernel/sev.c b/arch/x86/kernel/sev.c
index a6895e440bc3..66a4ab9d95d7 100644
--- a/arch/x86/kernel/sev.c
+++ b/arch/x86/kernel/sev.c
@@ -11,7 +11,7 @@
 
 #include  /* For show_regs() */
 #include 
-#include 
+#include 
 #include 
 #include 
 #include 
@@ -615,7 +615,7 @@ int __init sev_es_efi_map_ghcbs(pgd_t *pgd)
int cpu;
u64 pfn;
 
-   if (!sev_es_active())
+   if (!prot_guest_has(PATTR_SEV_ES))
return 0;
 
pflags = _PAGE_NX | _PAGE_RW;
@@ -774,7 +774,7 @@ void __init sev_es_init_vc_handling(void)
 
BUILD_BUG_ON(offsetof(struct sev_es_runtime_data, ghcb_page) % 
PAGE_SIZE);
 
-   if (!sev_es_active())
+   if (!prot_guest_has(PATTR_SEV_ES))
return;
 
if (!sev_es_check_cpu_features())
diff --git a/arch/x86/mm/mem_encrypt.c b/arch/x86/mm/mem_encrypt.c
index eb5cae93b238..451de8e84fce 100644
--- a/arch/x86/mm/mem_encrypt.c
+++ b/arch/x86/mm/mem_encrypt.c
@@ -383,8 +383,7 @@ static bool sme_active(void)
return sme_me_mask && !sev_active();
 }
 
-/* Needs to be called from non-instrumentable code */
-bool noinstr sev_es_active(void)
+static bool sev_es_active(void)
 {
return sev_status & MSR_AMD64_SEV_ES_ENABLED;
 }
@@ -482,7 +481,7 @@ static void print_mem_encrypt_feature_info(void)
pr_cont(" SEV");
 
/* Encrypted Register State */
-   if (sev_es_active())
+   if (amd_prot_guest_has(PATTR_SEV_ES))
pr_cont(" SEV-ES");
 
pr_cont("\n");
@@ -501,7 +500,7 @@ void __init mem_encrypt_init(void)
 * With SEV, we need to unroll the rep string I/O instructions,
 * but SEV-ES supports them through the #VC handler.
 */
-   if (amd_prot_guest_has(PATTR_SEV) && !sev_es_active())
+   if (amd_prot_guest_has(PATTR_SEV) && !amd_prot_guest_has(PATTR_SEV_ES))
static_branch_enable(_enable_key);
 
print_mem_encrypt_feature_info();
diff --git a/arch/x86/realmode/init.c b/arch/x86/realmode/init.c
index 2109ae569c67..7711d0071f41 100644
--- a/arch/x86/realmode/init.c
+++ b/arch/x86/realmode/init.c
@@ -2,7 +2,6 @@
 #include 
 #include 
 #include 
-#include 
 #include 
 #include 
 
@@ -48,7 +47,7 @@ static void sme_sev_setup_real_mode(struct trampoline_header 
*th)
if (prot_guest_has(PATTR_HOST_MEM_ENCRYPT))
th->flags |= TH_FLAGS_SME_ACTIVE;
 
-   if (sev_es_active()) {
+   if (prot_guest_has(PATTR_GUEST_PROT_STATE)) {
/*
 * Skip the call to verify_cpu() in secondary_startup_64 as it
 * will cause #VC exceptions when the AP can't handle them yet.
-- 
2.32.0



[PATCH 05/11] x86/sev: Replace occurrences of sev_active() with prot_guest_has()

2021-07-27 Thread Tom Lendacky
Replace occurrences of sev_active() with the more generic prot_guest_has()
using PATTR_GUEST_MEM_ENCRYPT, except for in arch/x86/mm/mem_encrypt*.c
where PATTR_SEV will be used. If future support is added for other memory
encryption technologies, the use of PATTR_GUEST_MEM_ENCRYPT can be
updated, as required, to use PATTR_SEV.

Cc: Thomas Gleixner 
Cc: Ingo Molnar 
Cc: Borislav Petkov 
Cc: Dave Hansen 
Cc: Andy Lutomirski 
Cc: Peter Zijlstra 
Cc: Ard Biesheuvel 
Signed-off-by: Tom Lendacky 
---
 arch/x86/include/asm/mem_encrypt.h |  2 --
 arch/x86/kernel/crash_dump_64.c|  4 +++-
 arch/x86/kernel/kvm.c  |  3 ++-
 arch/x86/kernel/kvmclock.c |  4 ++--
 arch/x86/kernel/machine_kexec_64.c | 16 
 arch/x86/kvm/svm/svm.c |  3 ++-
 arch/x86/mm/ioremap.c  |  6 +++---
 arch/x86/mm/mem_encrypt.c  | 15 +++
 arch/x86/platform/efi/efi_64.c |  9 +
 9 files changed, 32 insertions(+), 30 deletions(-)

diff --git a/arch/x86/include/asm/mem_encrypt.h 
b/arch/x86/include/asm/mem_encrypt.h
index 956338406cec..7e25de37c148 100644
--- a/arch/x86/include/asm/mem_encrypt.h
+++ b/arch/x86/include/asm/mem_encrypt.h
@@ -50,7 +50,6 @@ void __init mem_encrypt_free_decrypted_mem(void);
 void __init mem_encrypt_init(void);
 
 void __init sev_es_init_vc_handling(void);
-bool sev_active(void);
 bool sev_es_active(void);
 bool amd_prot_guest_has(unsigned int attr);
 
@@ -75,7 +74,6 @@ static inline void __init sme_encrypt_kernel(struct 
boot_params *bp) { }
 static inline void __init sme_enable(struct boot_params *bp) { }
 
 static inline void sev_es_init_vc_handling(void) { }
-static inline bool sev_active(void) { return false; }
 static inline bool sev_es_active(void) { return false; }
 static inline bool amd_prot_guest_has(unsigned int attr) { return false; }
 
diff --git a/arch/x86/kernel/crash_dump_64.c b/arch/x86/kernel/crash_dump_64.c
index 045e82e8945b..0cfe35f03e67 100644
--- a/arch/x86/kernel/crash_dump_64.c
+++ b/arch/x86/kernel/crash_dump_64.c
@@ -10,6 +10,7 @@
 #include 
 #include 
 #include 
+#include 
 
 static ssize_t __copy_oldmem_page(unsigned long pfn, char *buf, size_t csize,
  unsigned long offset, int userbuf,
@@ -73,5 +74,6 @@ ssize_t copy_oldmem_page_encrypted(unsigned long pfn, char 
*buf, size_t csize,
 
 ssize_t elfcorehdr_read(char *buf, size_t count, u64 *ppos)
 {
-   return read_from_oldmem(buf, count, ppos, 0, sev_active());
+   return read_from_oldmem(buf, count, ppos, 0,
+   prot_guest_has(PATTR_GUEST_MEM_ENCRYPT));
 }
diff --git a/arch/x86/kernel/kvm.c b/arch/x86/kernel/kvm.c
index a26643dc6bd6..9d08ad2f3faa 100644
--- a/arch/x86/kernel/kvm.c
+++ b/arch/x86/kernel/kvm.c
@@ -27,6 +27,7 @@
 #include 
 #include 
 #include 
+#include 
 #include 
 #include 
 #include 
@@ -418,7 +419,7 @@ static void __init sev_map_percpu_data(void)
 {
int cpu;
 
-   if (!sev_active())
+   if (!prot_guest_has(PATTR_GUEST_MEM_ENCRYPT))
return;
 
for_each_possible_cpu(cpu) {
diff --git a/arch/x86/kernel/kvmclock.c b/arch/x86/kernel/kvmclock.c
index ad273e5861c1..f7ba78a23dcd 100644
--- a/arch/x86/kernel/kvmclock.c
+++ b/arch/x86/kernel/kvmclock.c
@@ -16,9 +16,9 @@
 #include 
 #include 
 #include 
+#include 
 
 #include 
-#include 
 #include 
 #include 
 
@@ -232,7 +232,7 @@ static void __init kvmclock_init_mem(void)
 * hvclock is shared between the guest and the hypervisor, must
 * be mapped decrypted.
 */
-   if (sev_active()) {
+   if (prot_guest_has(PATTR_GUEST_MEM_ENCRYPT)) {
r = set_memory_decrypted((unsigned long) hvclock_mem,
 1UL << order);
if (r) {
diff --git a/arch/x86/kernel/machine_kexec_64.c 
b/arch/x86/kernel/machine_kexec_64.c
index 8e7b517ad738..66ff788b79c9 100644
--- a/arch/x86/kernel/machine_kexec_64.c
+++ b/arch/x86/kernel/machine_kexec_64.c
@@ -167,7 +167,7 @@ static int init_transition_pgtable(struct kimage *image, 
pgd_t *pgd)
}
pte = pte_offset_kernel(pmd, vaddr);
 
-   if (sev_active())
+   if (prot_guest_has(PATTR_GUEST_MEM_ENCRYPT))
prot = PAGE_KERNEL_EXEC;
 
set_pte(pte, pfn_pte(paddr >> PAGE_SHIFT, prot));
@@ -207,7 +207,7 @@ static int init_pgtable(struct kimage *image, unsigned long 
start_pgtable)
level4p = (pgd_t *)__va(start_pgtable);
clear_page(level4p);
 
-   if (sev_active()) {
+   if (prot_guest_has(PATTR_GUEST_MEM_ENCRYPT)) {
info.page_flag   |= _PAGE_ENC;
info.kernpg_flag |= _PAGE_ENC;
}
@@ -570,12 +570,12 @@ void arch_kexec_unprotect_crashkres(void)
  */
 int arch_kexec_post_alloc_pages(void *vaddr, unsigned int pages, gfp_t gfp)
 {
-   if (sev_active())
+   if (!prot_guest_has(PATTR_HOST_MEM_ENCRYPT))
return 0;
 
/*
-* If SME 

[PATCH 04/11] x86/sme: Replace occurrences of sme_active() with prot_guest_has()

2021-07-27 Thread Tom Lendacky
Replace occurrences of sme_active() with the more generic prot_guest_has()
using PATTR_HOST_MEM_ENCRYPT, except for in arch/x86/mm/mem_encrypt*.c
where PATTR_SME will be used. If future support is added for other memory
encryption technologies, the use of PATTR_HOST_MEM_ENCRYPT can be
updated, as required, to use PATTR_SME.

Cc: Thomas Gleixner 
Cc: Ingo Molnar 
Cc: Borislav Petkov 
Cc: Dave Hansen 
Cc: Andy Lutomirski 
Cc: Peter Zijlstra 
Cc: Joerg Roedel 
Cc: Will Deacon 
Signed-off-by: Tom Lendacky 
---
 arch/x86/include/asm/kexec.h |  2 +-
 arch/x86/include/asm/mem_encrypt.h   |  2 --
 arch/x86/kernel/machine_kexec_64.c   |  3 ++-
 arch/x86/kernel/pci-swiotlb.c|  9 -
 arch/x86/kernel/relocate_kernel_64.S |  2 +-
 arch/x86/mm/ioremap.c|  6 +++---
 arch/x86/mm/mem_encrypt.c| 10 +-
 arch/x86/mm/mem_encrypt_identity.c   |  3 ++-
 arch/x86/realmode/init.c |  5 +++--
 drivers/iommu/amd/init.c |  7 ---
 10 files changed, 25 insertions(+), 24 deletions(-)

diff --git a/arch/x86/include/asm/kexec.h b/arch/x86/include/asm/kexec.h
index 0a6e34b07017..11b7c06e2828 100644
--- a/arch/x86/include/asm/kexec.h
+++ b/arch/x86/include/asm/kexec.h
@@ -129,7 +129,7 @@ relocate_kernel(unsigned long indirection_page,
unsigned long page_list,
unsigned long start_address,
unsigned int preserve_context,
-   unsigned int sme_active);
+   unsigned int host_mem_enc_active);
 #endif
 
 #define ARCH_HAS_KIMAGE_ARCH
diff --git a/arch/x86/include/asm/mem_encrypt.h 
b/arch/x86/include/asm/mem_encrypt.h
index a46d47662772..956338406cec 100644
--- a/arch/x86/include/asm/mem_encrypt.h
+++ b/arch/x86/include/asm/mem_encrypt.h
@@ -50,7 +50,6 @@ void __init mem_encrypt_free_decrypted_mem(void);
 void __init mem_encrypt_init(void);
 
 void __init sev_es_init_vc_handling(void);
-bool sme_active(void);
 bool sev_active(void);
 bool sev_es_active(void);
 bool amd_prot_guest_has(unsigned int attr);
@@ -76,7 +75,6 @@ static inline void __init sme_encrypt_kernel(struct 
boot_params *bp) { }
 static inline void __init sme_enable(struct boot_params *bp) { }
 
 static inline void sev_es_init_vc_handling(void) { }
-static inline bool sme_active(void) { return false; }
 static inline bool sev_active(void) { return false; }
 static inline bool sev_es_active(void) { return false; }
 static inline bool amd_prot_guest_has(unsigned int attr) { return false; }
diff --git a/arch/x86/kernel/machine_kexec_64.c 
b/arch/x86/kernel/machine_kexec_64.c
index 131f30fdcfbd..8e7b517ad738 100644
--- a/arch/x86/kernel/machine_kexec_64.c
+++ b/arch/x86/kernel/machine_kexec_64.c
@@ -17,6 +17,7 @@
 #include 
 #include 
 #include 
+#include 
 
 #include 
 #include 
@@ -358,7 +359,7 @@ void machine_kexec(struct kimage *image)
   (unsigned long)page_list,
   image->start,
   image->preserve_context,
-  sme_active());
+  prot_guest_has(PATTR_HOST_MEM_ENCRYPT));
 
 #ifdef CONFIG_KEXEC_JUMP
if (image->preserve_context)
diff --git a/arch/x86/kernel/pci-swiotlb.c b/arch/x86/kernel/pci-swiotlb.c
index c2cfa5e7c152..bd9a9cfbc9a2 100644
--- a/arch/x86/kernel/pci-swiotlb.c
+++ b/arch/x86/kernel/pci-swiotlb.c
@@ -6,7 +6,7 @@
 #include 
 #include 
 #include 
-#include 
+#include 
 
 #include 
 #include 
@@ -45,11 +45,10 @@ int __init pci_swiotlb_detect_4gb(void)
swiotlb = 1;
 
/*
-* If SME is active then swiotlb will be set to 1 so that bounce
-* buffers are allocated and used for devices that do not support
-* the addressing range required for the encryption mask.
+* Set swiotlb to 1 so that bounce buffers are allocated and used for
+* devices that can't support DMA to encrypted memory.
 */
-   if (sme_active())
+   if (prot_guest_has(PATTR_HOST_MEM_ENCRYPT))
swiotlb = 1;
 
return swiotlb;
diff --git a/arch/x86/kernel/relocate_kernel_64.S 
b/arch/x86/kernel/relocate_kernel_64.S
index c53271aebb64..c8fe74a28143 100644
--- a/arch/x86/kernel/relocate_kernel_64.S
+++ b/arch/x86/kernel/relocate_kernel_64.S
@@ -47,7 +47,7 @@ SYM_CODE_START_NOALIGN(relocate_kernel)
 * %rsi page_list
 * %rdx start address
 * %rcx preserve_context
-* %r8  sme_active
+* %r8  host_mem_enc_active
 */
 
/* Save the CPU context, used for jumping back */
diff --git a/arch/x86/mm/ioremap.c b/arch/x86/mm/ioremap.c
index 60ade7dd71bd..f899f02c0241 100644
--- a/arch/x86/mm/ioremap.c
+++ b/arch/x86/mm/ioremap.c
@@ -14,7 +14,7 @@
 #include 
 #include 
 #include 
-#include 
+#include 
 #include 
 #include 
 
@@ -702,7 +702,7 @@ bool arch_memremap_can_ram_remap(resource_size_t phys_addr, 
unsigned long size,

[PATCH 03/11] powerpc/pseries/svm: Add a powerpc version of prot_guest_has()

2021-07-27 Thread Tom Lendacky
Introduce a powerpc version of the prot_guest_has() function. This will
be used to replace the powerpc mem_encrypt_active() implementation, so
the implementation will initially only support the PATTR_MEM_ENCRYPT
attribute.

Cc: Michael Ellerman 
Cc: Benjamin Herrenschmidt 
Cc: Paul Mackerras 
Signed-off-by: Tom Lendacky 
---
 arch/powerpc/include/asm/protected_guest.h | 30 ++
 arch/powerpc/platforms/pseries/Kconfig |  1 +
 2 files changed, 31 insertions(+)
 create mode 100644 arch/powerpc/include/asm/protected_guest.h

diff --git a/arch/powerpc/include/asm/protected_guest.h 
b/arch/powerpc/include/asm/protected_guest.h
new file mode 100644
index ..ce55c2c7e534
--- /dev/null
+++ b/arch/powerpc/include/asm/protected_guest.h
@@ -0,0 +1,30 @@
+/* SPDX-License-Identifier: GPL-2.0-only */
+/*
+ * Protected Guest (and Host) Capability checks
+ *
+ * Copyright (C) 2021 Advanced Micro Devices, Inc.
+ *
+ * Author: Tom Lendacky 
+ */
+
+#ifndef _POWERPC_PROTECTED_GUEST_H
+#define _POWERPC_PROTECTED_GUEST_H
+
+#include 
+
+#ifndef __ASSEMBLY__
+
+static inline bool prot_guest_has(unsigned int attr)
+{
+   switch (attr) {
+   case PATTR_MEM_ENCRYPT:
+   return is_secure_guest();
+
+   default:
+   return false;
+   }
+}
+
+#endif /* __ASSEMBLY__ */
+
+#endif /* _POWERPC_PROTECTED_GUEST_H */
diff --git a/arch/powerpc/platforms/pseries/Kconfig 
b/arch/powerpc/platforms/pseries/Kconfig
index 5e037df2a3a1..8ce5417d6feb 100644
--- a/arch/powerpc/platforms/pseries/Kconfig
+++ b/arch/powerpc/platforms/pseries/Kconfig
@@ -159,6 +159,7 @@ config PPC_SVM
select SWIOTLB
select ARCH_HAS_MEM_ENCRYPT
select ARCH_HAS_FORCE_DMA_UNENCRYPTED
+   select ARCH_HAS_PROTECTED_GUEST
help
 There are certain POWER platforms which support secure guests using
 the Protected Execution Facility, with the help of an Ultravisor
-- 
2.32.0



[PATCH 02/11] x86/sev: Add an x86 version of prot_guest_has()

2021-07-27 Thread Tom Lendacky
Introduce an x86 version of the prot_guest_has() function. This will be
used in the more generic x86 code to replace vendor specific calls like
sev_active(), etc.

While the name suggests this is intended mainly for guests, it will
also be used for host memory encryption checks in place of sme_active().

The amd_prot_guest_has() function does not use EXPORT_SYMBOL_GPL for the
same reasons previously stated when changing sme_active(), sev_active() and
sme_me_mask to EXPORT_SYMBOL:
  commit 87df26175e67 ("x86/mm: Unbreak modules that rely on external 
PAGE_KERNEL availability")
  commit 9d5f38ba6c82 ("x86/mm: Unbreak modules that use the DMA API")

Cc: Thomas Gleixner 
Cc: Ingo Molnar 
Cc: Borislav Petkov 
Cc: Dave Hansen 
Cc: Andy Lutomirski 
Cc: Peter Zijlstra 
Co-developed-by: Andi Kleen 
Signed-off-by: Andi Kleen 
Co-developed-by: Kuppuswamy Sathyanarayanan 
Signed-off-by: Kuppuswamy Sathyanarayanan 
Signed-off-by: Tom Lendacky 
---
 arch/x86/Kconfig   |  1 +
 arch/x86/include/asm/mem_encrypt.h |  2 ++
 arch/x86/include/asm/protected_guest.h | 27 ++
 arch/x86/mm/mem_encrypt.c  | 25 
 include/linux/protected_guest.h|  5 +
 5 files changed, 60 insertions(+)
 create mode 100644 arch/x86/include/asm/protected_guest.h

diff --git a/arch/x86/Kconfig b/arch/x86/Kconfig
index 49270655e827..e47213cbfc55 100644
--- a/arch/x86/Kconfig
+++ b/arch/x86/Kconfig
@@ -1514,6 +1514,7 @@ config AMD_MEM_ENCRYPT
select ARCH_HAS_FORCE_DMA_UNENCRYPTED
select INSTRUCTION_DECODER
select ARCH_HAS_RESTRICTED_VIRTIO_MEMORY_ACCESS
+   select ARCH_HAS_PROTECTED_GUEST
help
  Say yes to enable support for the encryption of system memory.
  This requires an AMD processor that supports Secure Memory
diff --git a/arch/x86/include/asm/mem_encrypt.h 
b/arch/x86/include/asm/mem_encrypt.h
index 9c80c68d75b5..a46d47662772 100644
--- a/arch/x86/include/asm/mem_encrypt.h
+++ b/arch/x86/include/asm/mem_encrypt.h
@@ -53,6 +53,7 @@ void __init sev_es_init_vc_handling(void);
 bool sme_active(void);
 bool sev_active(void);
 bool sev_es_active(void);
+bool amd_prot_guest_has(unsigned int attr);
 
 #define __bss_decrypted __section(".bss..decrypted")
 
@@ -78,6 +79,7 @@ static inline void sev_es_init_vc_handling(void) { }
 static inline bool sme_active(void) { return false; }
 static inline bool sev_active(void) { return false; }
 static inline bool sev_es_active(void) { return false; }
+static inline bool amd_prot_guest_has(unsigned int attr) { return false; }
 
 static inline int __init
 early_set_memory_decrypted(unsigned long vaddr, unsigned long size) { return 
0; }
diff --git a/arch/x86/include/asm/protected_guest.h 
b/arch/x86/include/asm/protected_guest.h
new file mode 100644
index ..b4a267dddf93
--- /dev/null
+++ b/arch/x86/include/asm/protected_guest.h
@@ -0,0 +1,27 @@
+/* SPDX-License-Identifier: GPL-2.0-only */
+/*
+ * Protected Guest (and Host) Capability checks
+ *
+ * Copyright (C) 2021 Advanced Micro Devices, Inc.
+ *
+ * Author: Tom Lendacky 
+ */
+
+#ifndef _X86_PROTECTED_GUEST_H
+#define _X86_PROTECTED_GUEST_H
+
+#include 
+
+#ifndef __ASSEMBLY__
+
+static inline bool prot_guest_has(unsigned int attr)
+{
+   if (sme_me_mask)
+   return amd_prot_guest_has(attr);
+
+   return false;
+}
+
+#endif /* __ASSEMBLY__ */
+
+#endif /* _X86_PROTECTED_GUEST_H */
diff --git a/arch/x86/mm/mem_encrypt.c b/arch/x86/mm/mem_encrypt.c
index ff08dc463634..7d3b2c6f5f88 100644
--- a/arch/x86/mm/mem_encrypt.c
+++ b/arch/x86/mm/mem_encrypt.c
@@ -20,6 +20,7 @@
 #include 
 #include 
 #include 
+#include 
 
 #include 
 #include 
@@ -389,6 +390,30 @@ bool noinstr sev_es_active(void)
return sev_status & MSR_AMD64_SEV_ES_ENABLED;
 }
 
+bool amd_prot_guest_has(unsigned int attr)
+{
+   switch (attr) {
+   case PATTR_MEM_ENCRYPT:
+   return sme_me_mask != 0;
+
+   case PATTR_SME:
+   case PATTR_HOST_MEM_ENCRYPT:
+   return sme_active();
+
+   case PATTR_SEV:
+   case PATTR_GUEST_MEM_ENCRYPT:
+   return sev_active();
+
+   case PATTR_SEV_ES:
+   case PATTR_GUEST_PROT_STATE:
+   return sev_es_active();
+
+   default:
+   return false;
+   }
+}
+EXPORT_SYMBOL(amd_prot_guest_has);
+
 /* Override for DMA direct allocation check - ARCH_HAS_FORCE_DMA_UNENCRYPTED */
 bool force_dma_unencrypted(struct device *dev)
 {
diff --git a/include/linux/protected_guest.h b/include/linux/protected_guest.h
index f8ed7b72967b..7a7120abbb62 100644
--- a/include/linux/protected_guest.h
+++ b/include/linux/protected_guest.h
@@ -17,6 +17,11 @@
 #define PATTR_GUEST_MEM_ENCRYPT 2   /* Guest encrypted memory */
 #define PATTR_GUEST_PROT_STATE 3   /* Guest encrypted state */
 
+/* 0x800 - 0x8ff reserved for AMD */
+#define PATTR_SME

[PATCH 01/11] mm: Introduce a function to check for virtualization protection features

2021-07-27 Thread Tom Lendacky
In prep for other protected virtualization technologies, introduce a
generic helper function, prot_guest_has(), that can be used to check
for specific protection attributes, like memory encryption. This is
intended to eliminate having to add multiple technology-specific checks
to the code (e.g. if (sev_active() || tdx_active())).

Co-developed-by: Andi Kleen 
Signed-off-by: Andi Kleen 
Co-developed-by: Kuppuswamy Sathyanarayanan 
Signed-off-by: Kuppuswamy Sathyanarayanan 
Signed-off-by: Tom Lendacky 
---
 arch/Kconfig|  3 +++
 include/linux/protected_guest.h | 32 
 2 files changed, 35 insertions(+)
 create mode 100644 include/linux/protected_guest.h

diff --git a/arch/Kconfig b/arch/Kconfig
index 129df498a8e1..a47cf283f2ff 100644
--- a/arch/Kconfig
+++ b/arch/Kconfig
@@ -1231,6 +1231,9 @@ config RELR
 config ARCH_HAS_MEM_ENCRYPT
bool
 
+config ARCH_HAS_PROTECTED_GUEST
+   bool
+
 config HAVE_SPARSE_SYSCALL_NR
bool
help
diff --git a/include/linux/protected_guest.h b/include/linux/protected_guest.h
new file mode 100644
index ..f8ed7b72967b
--- /dev/null
+++ b/include/linux/protected_guest.h
@@ -0,0 +1,32 @@
+/* SPDX-License-Identifier: GPL-2.0-only */
+/*
+ * Protected Guest (and Host) Capability checks
+ *
+ * Copyright (C) 2021 Advanced Micro Devices, Inc.
+ *
+ * Author: Tom Lendacky 
+ */
+
+#ifndef _PROTECTED_GUEST_H
+#define _PROTECTED_GUEST_H
+
+#ifndef __ASSEMBLY__
+
+#define PATTR_MEM_ENCRYPT  0   /* Encrypted memory */
+#define PATTR_HOST_MEM_ENCRYPT 1   /* Host encrypted memory */
+#define PATTR_GUEST_MEM_ENCRYPT 2   /* Guest encrypted memory */
+#define PATTR_GUEST_PROT_STATE 3   /* Guest encrypted state */
+
+#ifdef CONFIG_ARCH_HAS_PROTECTED_GUEST
+
+#include <asm/protected_guest.h>
+
+#else  /* !CONFIG_ARCH_HAS_PROTECTED_GUEST */
+
+static inline bool prot_guest_has(unsigned int attr) { return false; }
+
+#endif /* CONFIG_ARCH_HAS_PROTECTED_GUEST */
+
+#endif /* __ASSEMBLY__ */
+
+#endif /* _PROTECTED_GUEST_H */
-- 
2.32.0



[PATCH 00/11] Implement generic prot_guest_has() helper function

2021-07-27 Thread Tom Lendacky
This patch series provides a generic helper function, prot_guest_has(),
to replace the sme_active(), sev_active(), sev_es_active() and
mem_encrypt_active() functions.

It is expected that as new protected virtualization technologies are
added to the kernel, they can all be covered by a single function call
instead of a collection of specific function calls all called from the
same locations.
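
The conversions themselves are mechanical and follow the attribute
mapping defined by the x86 patch in this series:

	sme_active()         ->  prot_guest_has(PATTR_HOST_MEM_ENCRYPT)
	sev_active()         ->  prot_guest_has(PATTR_GUEST_MEM_ENCRYPT)
	sev_es_active()      ->  prot_guest_has(PATTR_GUEST_PROT_STATE)
	mem_encrypt_active() ->  prot_guest_has(PATTR_MEM_ENCRYPT)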

The powerpc and s390 patches have been compile tested only. Can the
folks copied on this series verify that nothing breaks for them?

Cc: Andi Kleen 
Cc: Andy Lutomirski 
Cc: Ard Biesheuvel 
Cc: Baoquan He 
Cc: Benjamin Herrenschmidt 
Cc: Borislav Petkov 
Cc: Christian Borntraeger 
Cc: Daniel Vetter 
Cc: Dave Hansen 
Cc: Dave Young 
Cc: David Airlie 
Cc: Heiko Carstens 
Cc: Ingo Molnar 
Cc: Joerg Roedel 
Cc: Maarten Lankhorst 
Cc: Maxime Ripard 
Cc: Michael Ellerman 
Cc: Paul Mackerras 
Cc: Peter Zijlstra 
Cc: Thomas Gleixner 
Cc: Thomas Zimmermann 
Cc: Vasily Gorbik 
Cc: VMware Graphics 
Cc: Will Deacon 

---

Patches based on:
  https://git.kernel.org/pub/scm/linux/kernel/git/tip/tip.git master
  commit 79e920060fa7 ("Merge branch 'WIP/fixes'")

Tom Lendacky (11):
  mm: Introduce a function to check for virtualization protection
features
  x86/sev: Add an x86 version of prot_guest_has()
  powerpc/pseries/svm: Add a powerpc version of prot_guest_has()
  x86/sme: Replace occurrences of sme_active() with prot_guest_has()
  x86/sev: Replace occurrences of sev_active() with prot_guest_has()
  x86/sev: Replace occurrences of sev_es_active() with prot_guest_has()
  treewide: Replace the use of mem_encrypt_active() with
prot_guest_has()
  mm: Remove the now unused mem_encrypt_active() function
  x86/sev: Remove the now unused mem_encrypt_active() function
  powerpc/pseries/svm: Remove the now unused mem_encrypt_active()
function
  s390/mm: Remove the now unused mem_encrypt_active() function

 arch/Kconfig   |  3 ++
 arch/powerpc/include/asm/mem_encrypt.h |  5 --
 arch/powerpc/include/asm/protected_guest.h | 30 +++
 arch/powerpc/platforms/pseries/Kconfig |  1 +
 arch/s390/include/asm/mem_encrypt.h|  2 -
 arch/x86/Kconfig   |  1 +
 arch/x86/include/asm/kexec.h   |  2 +-
 arch/x86/include/asm/mem_encrypt.h | 13 +
 arch/x86/include/asm/protected_guest.h | 27 ++
 arch/x86/kernel/crash_dump_64.c|  4 +-
 arch/x86/kernel/head64.c   |  4 +-
 arch/x86/kernel/kvm.c  |  3 +-
 arch/x86/kernel/kvmclock.c |  4 +-
 arch/x86/kernel/machine_kexec_64.c | 19 +++
 arch/x86/kernel/pci-swiotlb.c  |  9 ++--
 arch/x86/kernel/relocate_kernel_64.S   |  2 +-
 arch/x86/kernel/sev.c  |  6 +--
 arch/x86/kvm/svm/svm.c |  3 +-
 arch/x86/mm/ioremap.c  | 16 +++---
 arch/x86/mm/mem_encrypt.c  | 60 +++---
 arch/x86/mm/mem_encrypt_identity.c |  3 +-
 arch/x86/mm/pat/set_memory.c   |  3 +-
 arch/x86/platform/efi/efi_64.c |  9 ++--
 arch/x86/realmode/init.c   |  8 +--
 drivers/gpu/drm/amd/amdgpu/amdgpu_drv.c|  4 +-
 drivers/gpu/drm/drm_cache.c|  4 +-
 drivers/gpu/drm/vmwgfx/vmwgfx_drv.c|  4 +-
 drivers/gpu/drm/vmwgfx/vmwgfx_msg.c|  6 +--
 drivers/iommu/amd/init.c   |  7 +--
 drivers/iommu/amd/iommu.c  |  3 +-
 drivers/iommu/amd/iommu_v2.c   |  3 +-
 drivers/iommu/iommu.c  |  3 +-
 fs/proc/vmcore.c   |  6 +--
 include/linux/mem_encrypt.h|  4 --
 include/linux/protected_guest.h| 37 +
 kernel/dma/swiotlb.c   |  4 +-
 36 files changed, 218 insertions(+), 104 deletions(-)
 create mode 100644 arch/powerpc/include/asm/protected_guest.h
 create mode 100644 arch/x86/include/asm/protected_guest.h
 create mode 100644 include/linux/protected_guest.h

-- 
2.32.0



Re: [PATCH v13 01/12] swiotlb: Refactor swiotlb init functions

2021-06-18 Thread Tom Lendacky
On 6/18/21 1:25 AM, Claire Chang wrote:
> On Fri, Jun 18, 2021 at 7:30 AM Stefano Stabellini
>  wrote:
>>
>> On Thu, 17 Jun 2021, Claire Chang wrote:
>>> Add a new function, swiotlb_init_io_tlb_mem, for the io_tlb_mem struct
>>> initialization to make the code reusable.
>>>
>>> Signed-off-by: Claire Chang 
>>> Reviewed-by: Christoph Hellwig 
>>> Tested-by: Stefano Stabellini 
>>> Tested-by: Will Deacon 
>>> ---
>>>  kernel/dma/swiotlb.c | 50 ++--
>>>  1 file changed, 25 insertions(+), 25 deletions(-)
>>>
>>> diff --git a/kernel/dma/swiotlb.c b/kernel/dma/swiotlb.c
>>> index 52e2ac526757..47bb2a766798 100644
>>> --- a/kernel/dma/swiotlb.c
>>> +++ b/kernel/dma/swiotlb.c
>>> @@ -168,9 +168,28 @@ void __init swiotlb_update_mem_attributes(void)
>>>   memset(vaddr, 0, bytes);
>>>  }
>>>
>>> -int __init swiotlb_init_with_tbl(char *tlb, unsigned long nslabs, int 
>>> verbose)
>>> +static void swiotlb_init_io_tlb_mem(struct io_tlb_mem *mem, phys_addr_t 
>>> start,
>>> + unsigned long nslabs, bool late_alloc)
>>>  {
>>> + void *vaddr = phys_to_virt(start);
>>>   unsigned long bytes = nslabs << IO_TLB_SHIFT, i;
>>> +
>>> + mem->nslabs = nslabs;
>>> + mem->start = start;
>>> + mem->end = mem->start + bytes;
>>> + mem->index = 0;
>>> + mem->late_alloc = late_alloc;
>>> + spin_lock_init(&mem->lock);
>>> + for (i = 0; i < mem->nslabs; i++) {
>>> + mem->slots[i].list = IO_TLB_SEGSIZE - io_tlb_offset(i);
>>> + mem->slots[i].orig_addr = INVALID_PHYS_ADDR;
>>> + mem->slots[i].alloc_size = 0;
>>> + }
>>> + memset(vaddr, 0, bytes);
>>> +}
>>> +
>>> +int __init swiotlb_init_with_tbl(char *tlb, unsigned long nslabs, int 
>>> verbose)
>>> +{
>>>   struct io_tlb_mem *mem;
>>>   size_t alloc_size;
>>>
>>> @@ -186,16 +205,8 @@ int __init swiotlb_init_with_tbl(char *tlb, unsigned 
>>> long nslabs, int verbose)
>>>   if (!mem)
>>>   panic("%s: Failed to allocate %zu bytes align=0x%lx\n",
>>> __func__, alloc_size, PAGE_SIZE);
>>> - mem->nslabs = nslabs;
>>> - mem->start = __pa(tlb);
>>> - mem->end = mem->start + bytes;
>>> - mem->index = 0;
>>> - spin_lock_init(&mem->lock);
>>> - for (i = 0; i < mem->nslabs; i++) {
>>> - mem->slots[i].list = IO_TLB_SEGSIZE - io_tlb_offset(i);
>>> - mem->slots[i].orig_addr = INVALID_PHYS_ADDR;
>>> - mem->slots[i].alloc_size = 0;
>>> - }
>>> +
>>> + swiotlb_init_io_tlb_mem(mem, __pa(tlb), nslabs, false);
>>>
>>>   io_tlb_default_mem = mem;
>>>   if (verbose)
>>> @@ -282,8 +293,8 @@ swiotlb_late_init_with_default_size(size_t default_size)
>>>  int
>>>  swiotlb_late_init_with_tbl(char *tlb, unsigned long nslabs)
>>>  {
>>> - unsigned long bytes = nslabs << IO_TLB_SHIFT, i;
>>>   struct io_tlb_mem *mem;
>>> + unsigned long bytes = nslabs << IO_TLB_SHIFT;
>>>
>>>   if (swiotlb_force == SWIOTLB_NO_FORCE)
>>>   return 0;
>>> @@ -297,20 +308,9 @@ swiotlb_late_init_with_tbl(char *tlb, unsigned long 
>>> nslabs)
>>>   if (!mem)
>>>   return -ENOMEM;
>>>
>>> - mem->nslabs = nslabs;
>>> - mem->start = virt_to_phys(tlb);
>>> - mem->end = mem->start + bytes;
>>> - mem->index = 0;
>>> - mem->late_alloc = 1;
>>> - spin_lock_init(&mem->lock);
>>> - for (i = 0; i < mem->nslabs; i++) {
>>> - mem->slots[i].list = IO_TLB_SEGSIZE - io_tlb_offset(i);
>>> - mem->slots[i].orig_addr = INVALID_PHYS_ADDR;
>>> - mem->slots[i].alloc_size = 0;
>>> - }
>>> -
>>> + memset(mem, 0, sizeof(*mem));
>>> + swiotlb_init_io_tlb_mem(mem, virt_to_phys(tlb), nslabs, true);
>>>   set_memory_decrypted((unsigned long)tlb, bytes >> PAGE_SHIFT);
>>> - memset(tlb, 0, bytes);
>>
>> This is good for swiotlb_late_init_with_tbl. However I have just noticed
>> that mem could also be allocated from swiotlb_init_with_tbl, in which
>> case the zeroing is missing. I think we need another memset in
>> swiotlb_init_with_tbl as well. Or maybe it could be better to have a
>> single memset at the beginning of swiotlb_init_io_tlb_mem instead. Up to
>> you.
> 
> swiotlb_init_with_tbl uses memblock_alloc to allocate the io_tlb_mem
> and memblock_alloc[1] will do memset in memblock_alloc_try_nid[2], so
> swiotlb_init_with_tbl is also good.
> I'm happy to add the memset in swiotlb_init_io_tlb_mem if you think
> it's clearer and safer.

On x86, if the memset is done before set_memory_decrypted() and memory
encryption is active, then the memory will look like ciphertext afterwards
and not be zeroes. If zeroed memory is required, then a memset must be
done after the set_memory_decrypted() calls.
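
A minimal sketch of the safe ordering, using the tlb/bytes variables
from the surrounding swiotlb code:

	/*
	 * With memory encryption active, writes through the
	 * still-encrypted mapping are transformed, so clear the buffer
	 * only after the mapping has been marked decrypted.
	 */
	set_memory_decrypted((unsigned long)tlb, bytes >> PAGE_SHIFT);
	memset(tlb, 0, bytes);	/* reads back as zeroes now */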

Thanks,
Tom

> 
> [1] 
> 

Re: [PATCH v7 01/15] swiotlb: Refactor swiotlb init functions

2021-05-27 Thread Tom Lendacky
On 5/27/21 9:41 AM, Tom Lendacky wrote:
> On 5/27/21 8:02 AM, Christoph Hellwig wrote:
>> On Wed, May 19, 2021 at 11:50:07AM -0700, Florian Fainelli wrote:
>>> You convert this call site with swiotlb_init_io_tlb_mem() which did not
>>> do the set_memory_decrypted()+memset(). Is this okay or should
>>> swiotlb_init_io_tlb_mem() add an additional argument to do this
>>> conditionally?
>>
>> The zeroing is useful and was missing before.  I think having a clean
>> state here is the right thing.
>>
>> Not sure about the set_memory_decrypted, swiotlb_update_mem_attributes
>> kinda suggests it is too early to set the memory decrypted.
>>
>> Adding Tom who should know about all this.
> 
> The reason for adding swiotlb_update_mem_attributes() was because having
> the call to set_memory_decrypted() in swiotlb_init_with_tbl() triggered a
> BUG_ON() related to interrupts not being enabled yet during boot. So that
> call had to be delayed until interrupts were enabled.

I pulled down and tested the patch set and booted with SME enabled. The
following was seen during the boot:

[0.134184] BUG: Bad page state in process swapper  pfn:108002
[0.134196] page:(ptrval) refcount:0 mapcount:-128 mapping: index:0x0 pfn:0x108002
[0.134201] flags: 0x17c000(node=0|zone=2|lastcpupid=0x1f)
[0.134208] raw: 0017c000 88847f355e28 88847f355e28
[0.134210] raw:  0001 ff7f
[0.134212] page dumped because: nonzero mapcount
[0.134213] Modules linked in:
[0.134218] CPU: 0 PID: 0 Comm: swapper Not tainted 5.13.0-rc2-sos-custom #3
[0.134221] Hardware name: ...
[0.134224] Call Trace:
[0.134233]  dump_stack+0x76/0x94
[0.134244]  bad_page+0xa6/0xf0
[0.134252]  __free_pages_ok+0x331/0x360
[0.134256]  memblock_free_all+0x158/0x1c1
[0.134267]  mem_init+0x1f/0x14c
[0.134273]  start_kernel+0x290/0x574
[0.134279]  secondary_startup_64_no_verify+0xb0/0xbb

I see this about 40 times during the boot, each with a different PFN. The
system boots (which seemed odd), but I don't know if there will be side
effects to this (I didn't stress the system).

I modified the code to add a flag to not do the set_memory_decrypted(), as
suggested by Florian, when invoked from swiotlb_init_with_tbl(), and that
eliminated the bad page state BUG.
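
One possible shape of that change (a hypothetical sketch, not the
posted diff) is to key the memory-attribute handling off the existing
late_alloc flag, since the early memblock path cannot touch page
attributes yet:

	static void swiotlb_init_io_tlb_mem(struct io_tlb_mem *mem,
					    phys_addr_t start,
					    unsigned long nslabs,
					    bool late_alloc)
	{
		void *vaddr = phys_to_virt(start);
		unsigned long bytes = nslabs << IO_TLB_SHIFT;

		if (late_alloc) {
			/* late path only: early boot defers this to
			 * swiotlb_update_mem_attributes() */
			set_memory_decrypted((unsigned long)vaddr,
					     bytes >> PAGE_SHIFT);
			memset(vaddr, 0, bytes);
		}
		/* ... field initialization as in the patch ... */
	}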

Thanks,
Tom

> 
> Thanks,
> Tom
> 
>>


Re: [PATCH v7 01/15] swiotlb: Refactor swiotlb init functions

2021-05-27 Thread Tom Lendacky
On 5/27/21 8:02 AM, Christoph Hellwig wrote:
> On Wed, May 19, 2021 at 11:50:07AM -0700, Florian Fainelli wrote:
>> You convert this call site with swiotlb_init_io_tlb_mem() which did not
>> do the set_memory_decrypted()+memset(). Is this okay or should
>> swiotlb_init_io_tlb_mem() add an additional argument to do this
>> conditionally?
> 
> The zeroing is useful and was missing before.  I think having a clean
> state here is the right thing.
> 
> Not sure about the set_memory_decrypted, swiotlb_update_mem_attributes
> kinda suggests it is too early to set the memory decrypted.
> 
> Adding Tom who should know about all this.

The reason for adding swiotlb_update_mem_attributes() was because having
the call to set_memory_decrypted() in swiotlb_init_with_tbl() triggered a
BUG_ON() related to interrupts not being enabled yet during boot. So that
call had to be delayed until interrupts were enabled.
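
In code terms the resulting boot ordering looks roughly like this
(early_setup()/late_setup() are illustrative wrappers; swiotlb_init()
and swiotlb_update_mem_attributes() are the existing functions):

	void __init early_setup(void)
	{
		/* interrupts still disabled: set_memory_decrypted()
		 * here would trip the BUG_ON() mentioned above */
		swiotlb_init(1);
	}

	void __init late_setup(void)
	{
		/* on x86 this runs from mem_encrypt_init(), once
		 * interrupts are enabled: mark the bounce buffer
		 * decrypted and zero it */
		swiotlb_update_mem_attributes();
	}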

Thanks,
Tom

>