Re: [Xen-devel] [PATCH v7 08/36] x86/mm: Add support to enable SME in early boot processing

2017-06-21 Thread Thomas Gleixner
On Wed, 21 Jun 2017, Tom Lendacky wrote:
> On 6/21/2017 10:38 AM, Thomas Gleixner wrote:
> > /*
> >  * Sanitize CPU configuration and retrieve the modifier
> >  * for the initial pgdir entry which will be programmed
> >  * into CR3. Depends on enabled SME encryption, normally 0.
> >  */
> > call __startup_secondary_64
> > 
> >  addq	$(init_top_pgt - __START_KERNEL_map), %rax
> > 
> > You can hide that stuff in C-code nicely without adding any cruft to the
> > ASM code.
> > 
> 
> Moving the call to verify_cpu into the C-code might be quite a bit of
> change.  Currently, the verify_cpu code is included code and not a
> global function.

Ah. Ok. I missed that.

> I can still do the __startup_secondary_64() function and then look to
> incorporate verify_cpu into both __startup_64() and
> __startup_secondary_64() as a post-patch to this series.

Yes, just having __startup_secondary_64() for now and there the extra bits
for that encryption stuff is fine.

> At least the secondary path will have a base C routine to which
> modifications can be made in the future if needed.  How does that sound?

Sounds like a plan.
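
[ For reference, the agreed-on helper could be as small as the following
sketch -- illustrative only, not the posted code, assuming the secondary
path needs nothing beyond the mask for now:

/*
 * Return the modifier for the initial pgdir entry programmed into CR3,
 * i.e. the SME encryption mask when SME is active, otherwise 0.
 */
unsigned long __startup_secondary_64(void)
{
	return sme_get_me_mask();
}
]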



Re: [Xen-devel] [PATCH v7 08/36] x86/mm: Add support to enable SME in early boot processing

2017-06-21 Thread Tom Lendacky

On 6/21/2017 10:38 AM, Thomas Gleixner wrote:
> On Wed, 21 Jun 2017, Tom Lendacky wrote:
> > On 6/21/2017 2:16 AM, Thomas Gleixner wrote:
> > > Why is this an unconditional function? Isn't the mask simply 0 when the MEM
> > > ENCRYPT support is disabled?
> > 
> > I made it unconditional because of the call from head_64.S. I can't make
> > use of the C level static inline function and since the mask is not a
> > variable if CONFIG_AMD_MEM_ENCRYPT is not configured (#defined to 0) I
> > can't reference the variable directly.
> > 
> > I could create a #define in head_64.S that changes this to load rax with
> > the variable if CONFIG_AMD_MEM_ENCRYPT is configured or a zero if it's
> > not or add a #ifdef at that point in the code directly. Thoughts on
> > that?
> 
> See below.
> 
> > > That does not make any sense. Neither the call to sme_encrypt_kernel() nor
> > > the following call to sme_get_me_mask().
> > > 
> > > __startup_64() is already C code, so why can't you simply call that from
> > > __startup_64() in C and return the mask from there?
> > 
> > I was trying to keep it explicit as to what was happening, but I can
> > move those calls into __startup_64().
> 
> That's much preferred. And the return value wants to be documented in both
> C and ASM code.

Will do.

> > I'll still need the call to sme_get_me_mask() in the secondary_startup_64
> > path, though (depending on your thoughts to the above response).
> 
> 	call	verify_cpu
> 
> 	movq	$(init_top_pgt - __START_KERNEL_map), %rax
> 
> So if you make that:
> 
> 	/*
> 	 * Sanitize CPU configuration and retrieve the modifier
> 	 * for the initial pgdir entry which will be programmed
> 	 * into CR3. Depends on enabled SME encryption, normally 0.
> 	 */
> 	call	__startup_secondary_64
> 
> 	addq	$(init_top_pgt - __START_KERNEL_map), %rax
> 
> You can hide that stuff in C-code nicely without adding any cruft to the
> ASM code.

Moving the call to verify_cpu into the C-code might be quite a bit of
change.  Currently, the verify_cpu code is included code and not a
global function.  I can still do the __startup_secondary_64() function
and then look to incorporate verify_cpu into both __startup_64() and
__startup_secondary_64() as a post-patch to this series.  At least the
secondary path will have a base C routine to which modifications can
be made in the future if needed.  How does that sound?

Thanks,
Tom

> Thanks,
> 
> 	tglx





Re: [Xen-devel] [PATCH v7 08/36] x86/mm: Add support to enable SME in early boot processing

2017-06-21 Thread Thomas Gleixner
On Wed, 21 Jun 2017, Tom Lendacky wrote:
> On 6/21/2017 2:16 AM, Thomas Gleixner wrote:
> > Why is this an unconditional function? Isn't the mask simply 0 when the MEM
> > ENCRYPT support is disabled?
> 
> I made it unconditional because of the call from head_64.S. I can't make
> use of the C level static inline function and since the mask is not a
> variable if CONFIG_AMD_MEM_ENCRYPT is not configured (#defined to 0) I
> can't reference the variable directly.
> 
> I could create a #define in head_64.S that changes this to load rax with
> the variable if CONFIG_AMD_MEM_ENCRYPT is configured or a zero if it's
> not or add a #ifdef at that point in the code directly. Thoughts on
> that?

See below.

> > That does not make any sense. Neither the call to sme_encrypt_kernel() nor
> > the following call to sme_get_me_mask().
> > 
> > __startup_64() is already C code, so why can't you simply call that from
> > __startup_64() in C and return the mask from there?
> 
> I was trying to keep it explicit as to what was happening, but I can
> move those calls into __startup_64().

That's much preferred. And the return value wants to be documented in both
C and ASM code.

> I'll still need the call to sme_get_me_mask() in the secondary_startup_64
> path, though (depending on your thoughts to the above response).

	call	verify_cpu

	movq	$(init_top_pgt - __START_KERNEL_map), %rax

So if you make that:

	/*
	 * Sanitize CPU configuration and retrieve the modifier
	 * for the initial pgdir entry which will be programmed
	 * into CR3. Depends on enabled SME encryption, normally 0.
	 */
	call	__startup_secondary_64

	addq	$(init_top_pgt - __START_KERNEL_map), %rax

You can hide that stuff in C-code nicely without adding any cruft to the
ASM code.

Thanks,

tglx



Re: [Xen-devel] [PATCH v7 08/36] x86/mm: Add support to enable SME in early boot processing

2017-06-21 Thread Tom Lendacky

On 6/21/2017 2:16 AM, Thomas Gleixner wrote:
> On Fri, 16 Jun 2017, Tom Lendacky wrote:
> > diff --git a/arch/x86/include/asm/mem_encrypt.h b/arch/x86/include/asm/mem_encrypt.h
> > index a105796..988b336 100644
> > --- a/arch/x86/include/asm/mem_encrypt.h
> > +++ b/arch/x86/include/asm/mem_encrypt.h
> > @@ -15,16 +15,24 @@
> >  
> >  #ifndef __ASSEMBLY__
> >  
> > +#include <linux/init.h>
> > +
> >  #ifdef CONFIG_AMD_MEM_ENCRYPT
> >  
> >  extern unsigned long sme_me_mask;
> >  
> > +void __init sme_enable(void);
> > +
> >  #else	/* !CONFIG_AMD_MEM_ENCRYPT */
> >  
> >  #define sme_me_mask	0UL
> >  
> > +static inline void __init sme_enable(void) { }
> > +
> >  #endif	/* CONFIG_AMD_MEM_ENCRYPT */
> >  
> > +unsigned long sme_get_me_mask(void);
> 
> Why is this an unconditional function? Isn't the mask simply 0 when the MEM
> ENCRYPT support is disabled?

I made it unconditional because of the call from head_64.S. I can't make
use of the C level static inline function and since the mask is not a
variable if CONFIG_AMD_MEM_ENCRYPT is not configured (#defined to 0) I
can't reference the variable directly.

I could create a #define in head_64.S that changes this to load rax with
the variable if CONFIG_AMD_MEM_ENCRYPT is configured or a zero if it's
not or add a #ifdef at that point in the code directly. Thoughts on
that?
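
[ For illustration, the #ifdef variant described above might look roughly
like this in head_64.S -- a sketch only, not code from the thread, and it
assumes sme_me_mask can be referenced %rip-relative at this point:

#ifdef CONFIG_AMD_MEM_ENCRYPT
	movq	sme_me_mask(%rip), %rax		/* mask variable exists */
#else
	xorq	%rax, %rax			/* mask is #defined to 0UL */
#endif
	addq	$(init_top_pgt - __START_KERNEL_map), %rax
]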




> > diff --git a/arch/x86/kernel/head_64.S b/arch/x86/kernel/head_64.S
> > index 6225550..ef12729 100644
> > --- a/arch/x86/kernel/head_64.S
> > +++ b/arch/x86/kernel/head_64.S
> > @@ -78,7 +78,29 @@ startup_64:
> >  	call	__startup_64
> >  	popq	%rsi
> >  
> > -	movq	$(early_top_pgt - __START_KERNEL_map), %rax
> > +	/*
> > +	 * Encrypt the kernel if SME is active.
> > +	 * The real_mode_data address is in %rsi and that register can be
> > +	 * clobbered by the called function so be sure to save it.
> > +	 */
> > +	push	%rsi
> > +	call	sme_encrypt_kernel
> > +	pop	%rsi
> 
> That does not make any sense. Neither the call to sme_encrypt_kernel() nor
> the following call to sme_get_me_mask().
> 
> __startup_64() is already C code, so why can't you simply call that from
> __startup_64() in C and return the mask from there?

I was trying to keep it explicit as to what was happening, but I can
move those calls into __startup_64(). I'll still need the call to
sme_get_me_mask() in the secondary_startup_64 path, though (depending on
your thoughts to the above response).

> > @@ -98,7 +120,20 @@ ENTRY(secondary_startup_64)
> >  	/* Sanitize CPU configuration */
> >  	call	verify_cpu
> >  
> > -	movq	$(init_top_pgt - __START_KERNEL_map), %rax
> > +	/*
> > +	 * Get the SME encryption mask.
> > +	 *  The encryption mask will be returned in %rax so we do an ADD
> > +	 *  below to be sure that the encryption mask is part of the
> > +	 *  value that will be stored in %cr3.
> > +	 *
> > +	 * The real_mode_data address is in %rsi and that register can be
> > +	 * clobbered by the called function so be sure to save it.
> > +	 */
> > +	push	%rsi
> > +	call	sme_get_me_mask
> > +	pop	%rsi
> 
> Do we really need a call here? The mask is established at this point, so
> it's either 0 when the encryption stuff is not compiled in or it can be
> retrieved from a variable which is accessible at this point.

Same as above, this can be updated based on the decided approach.

Thanks,
Tom

> > +
> > +	addq	$(init_top_pgt - __START_KERNEL_map), %rax
> >  1:
> >  
> >  	/* Enable PAE mode, PGE and LA57 */
> 
> Thanks,
> 
> 	tglx





Re: [Xen-devel] [PATCH v7 08/36] x86/mm: Add support to enable SME in early boot processing

2017-06-21 Thread Thomas Gleixner
On Fri, 16 Jun 2017, Tom Lendacky wrote:
> diff --git a/arch/x86/include/asm/mem_encrypt.h b/arch/x86/include/asm/mem_encrypt.h
> index a105796..988b336 100644
> --- a/arch/x86/include/asm/mem_encrypt.h
> +++ b/arch/x86/include/asm/mem_encrypt.h
> @@ -15,16 +15,24 @@
>  
>  #ifndef __ASSEMBLY__
>  
> +#include <linux/init.h>
> +
>  #ifdef CONFIG_AMD_MEM_ENCRYPT
>  
>  extern unsigned long sme_me_mask;
>  
> +void __init sme_enable(void);
> +
>  #else	/* !CONFIG_AMD_MEM_ENCRYPT */
>  
>  #define sme_me_mask	0UL
>  
> +static inline void __init sme_enable(void) { }
> +
>  #endif	/* CONFIG_AMD_MEM_ENCRYPT */
>  
> +unsigned long sme_get_me_mask(void);

Why is this an unconditional function? Isn't the mask simply 0 when the MEM
ENCRYPT support is disabled?

> diff --git a/arch/x86/kernel/head_64.S b/arch/x86/kernel/head_64.S
> index 6225550..ef12729 100644
> --- a/arch/x86/kernel/head_64.S
> +++ b/arch/x86/kernel/head_64.S
> @@ -78,7 +78,29 @@ startup_64:
>  	call	__startup_64
>  	popq	%rsi
>  
> -	movq	$(early_top_pgt - __START_KERNEL_map), %rax
> +	/*
> +	 * Encrypt the kernel if SME is active.
> +	 * The real_mode_data address is in %rsi and that register can be
> +	 * clobbered by the called function so be sure to save it.
> +	 */
> +	push	%rsi
> +	call	sme_encrypt_kernel
> +	pop	%rsi

That does not make any sense. Neither the call to sme_encrypt_kernel() nor
the following call to sme_get_me_mask().

__startup_64() is already C code, so why can't you simply call that from
__startup_64() in C and return the mask from there?

> @@ -98,7 +120,20 @@ ENTRY(secondary_startup_64)
>  	/* Sanitize CPU configuration */
>  	call	verify_cpu
>  
> -	movq	$(init_top_pgt - __START_KERNEL_map), %rax
> +	/*
> +	 * Get the SME encryption mask.
> +	 *  The encryption mask will be returned in %rax so we do an ADD
> +	 *  below to be sure that the encryption mask is part of the
> +	 *  value that will be stored in %cr3.
> +	 *
> +	 * The real_mode_data address is in %rsi and that register can be
> +	 * clobbered by the called function so be sure to save it.
> +	 */
> +	push	%rsi
> +	call	sme_get_me_mask
> +	pop	%rsi

Do we really need a call here? The mask is established at this point, so
it's either 0 when the encryption stuff is not compiled in or it can be
retrieved from a variable which is accessible at this point.

> +
> +	addq	$(init_top_pgt - __START_KERNEL_map), %rax
>  1:
>  
>   /* Enable PAE mode, PGE and LA57 */

Thanks,

tglx



Re: [Xen-devel] [PATCH v7 08/36] x86/mm: Add support to enable SME in early boot processing

2017-06-20 Thread Tom Lendacky

On 6/20/2017 2:38 AM, Borislav Petkov wrote:
> On Fri, Jun 16, 2017 at 01:51:15PM -0500, Tom Lendacky wrote:
> > Add support to the early boot code to use Secure Memory Encryption (SME).
> > Since the kernel has been loaded into memory in a decrypted state, encrypt
> > the kernel in place and update the early pagetables with the memory
> > encryption mask so that new pagetable entries will use memory encryption.
> > 
> > The routines to set the encryption mask and perform the encryption are
> > stub routines for now with functionality to be added in a later patch.
> > 
> > Because of the need to have the routines available to head_64.S, the
> > mem_encrypt.c is always built and #ifdefs in mem_encrypt.c will provide
> > functionality or stub routines depending on CONFIG_AMD_MEM_ENCRYPT.
> > 
> > Signed-off-by: Tom Lendacky <thomas.lendacky@amd.com>
> > ---
> >  arch/x86/include/asm/mem_encrypt.h |    8 +++
> >  arch/x86/kernel/head64.c           |   33 +-
> >  arch/x86/kernel/head_64.S          |   39 ++--
> >  arch/x86/mm/Makefile               |    4 +---
> >  arch/x86/mm/mem_encrypt.c          |   24 ++
> >  5 files changed, 93 insertions(+), 15 deletions(-)
> 
> ...
> 
> > diff --git a/arch/x86/mm/mem_encrypt.c b/arch/x86/mm/mem_encrypt.c
> > index b99d469..9a78277 100644
> > --- a/arch/x86/mm/mem_encrypt.c
> > +++ b/arch/x86/mm/mem_encrypt.c
> > @@ -11,6 +11,9 @@
> >   */
> >  
> >  #include <linux/linkage.h>
> > +#include <linux/init.h>
> > +
> > +#ifdef CONFIG_AMD_MEM_ENCRYPT
> >  
> >  /*
> >   * Since SME related variables are set early in the boot process they must
> > @@ -19,3 +22,24 @@
> >   */
> >  unsigned long sme_me_mask __section(.data) = 0;
> >  EXPORT_SYMBOL_GPL(sme_me_mask);
> > +
> > +void __init sme_encrypt_kernel(void)
> > +{
> > +}
> 
> Just the minor:
> 
> void __init sme_encrypt_kernel(void) { }
> 
> in case you have to respin.

I have to re-spin for the kbuild test error.  But given that this
function will be filled in later it's probably not worth doing the
space savings here.

Thanks,
Tom

> Reviewed-by: Borislav Petkov <bp@suse.de>





Re: [Xen-devel] [PATCH v7 08/36] x86/mm: Add support to enable SME in early boot processing

2017-06-20 Thread Borislav Petkov
On Fri, Jun 16, 2017 at 01:51:15PM -0500, Tom Lendacky wrote:
> Add support to the early boot code to use Secure Memory Encryption (SME).
> Since the kernel has been loaded into memory in a decrypted state, encrypt
> the kernel in place and update the early pagetables with the memory
> encryption mask so that new pagetable entries will use memory encryption.
> 
> The routines to set the encryption mask and perform the encryption are
> stub routines for now with functionality to be added in a later patch.
> 
> Because of the need to have the routines available to head_64.S, the
> mem_encrypt.c is always built and #ifdefs in mem_encrypt.c will provide
> functionality or stub routines depending on CONFIG_AMD_MEM_ENCRYPT.
> 
> Signed-off-by: Tom Lendacky <thomas.lendacky@amd.com>
> ---
>  arch/x86/include/asm/mem_encrypt.h |8 +++
>  arch/x86/kernel/head64.c   |   33 +-
>  arch/x86/kernel/head_64.S  |   39 ++--
>  arch/x86/mm/Makefile   |4 +---
>  arch/x86/mm/mem_encrypt.c  |   24 ++
>  5 files changed, 93 insertions(+), 15 deletions(-)

...

> diff --git a/arch/x86/mm/mem_encrypt.c b/arch/x86/mm/mem_encrypt.c
> index b99d469..9a78277 100644
> --- a/arch/x86/mm/mem_encrypt.c
> +++ b/arch/x86/mm/mem_encrypt.c
> @@ -11,6 +11,9 @@
>   */
>  
>  #include <linux/linkage.h>
> +#include <linux/init.h>
> +
> +#ifdef CONFIG_AMD_MEM_ENCRYPT
>  
>  /*
>   * Since SME related variables are set early in the boot process they must
> @@ -19,3 +22,24 @@
>   */
>  unsigned long sme_me_mask __section(.data) = 0;
>  EXPORT_SYMBOL_GPL(sme_me_mask);
> +
> +void __init sme_encrypt_kernel(void)
> +{
> +}

Just the minor:

void __init sme_encrypt_kernel(void) { }

in case you have to respin.

Reviewed-by: Borislav Petkov <bp@suse.de>

-- 
Regards/Gruss,
Boris.

Good mailing practices for 400: avoid top-posting and trim the reply.



[Xen-devel] [PATCH v7 08/36] x86/mm: Add support to enable SME in early boot processing

2017-06-16 Thread Tom Lendacky
Add support to the early boot code to use Secure Memory Encryption (SME).
Since the kernel has been loaded into memory in a decrypted state, encrypt
the kernel in place and update the early pagetables with the memory
encryption mask so that new pagetable entries will use memory encryption.

The routines to set the encryption mask and perform the encryption are
stub routines for now with functionality to be added in a later patch.

Because of the need to have the routines available to head_64.S, the
mem_encrypt.c is always built and #ifdefs in mem_encrypt.c will provide
functionality or stub routines depending on CONFIG_AMD_MEM_ENCRYPT.

Signed-off-by: Tom Lendacky <thomas.lendacky@amd.com>
---
 arch/x86/include/asm/mem_encrypt.h |8 +++
 arch/x86/kernel/head64.c   |   33 +-
 arch/x86/kernel/head_64.S  |   39 ++--
 arch/x86/mm/Makefile   |4 +---
 arch/x86/mm/mem_encrypt.c  |   24 ++
 5 files changed, 93 insertions(+), 15 deletions(-)
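
[ The mem_encrypt.c hunk that adds the mask accessor is truncated in this
archive copy, but based on the description above it presumably amounts to
something like the following sketch:

/*
 * Built unconditionally so head_64.S can always call it; sme_me_mask
 * is a real variable under CONFIG_AMD_MEM_ENCRYPT and #defined to 0UL
 * otherwise, so this returns 0 when SME is not configured.
 */
unsigned long sme_get_me_mask(void)
{
	return sme_me_mask;
}
]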

diff --git a/arch/x86/include/asm/mem_encrypt.h b/arch/x86/include/asm/mem_encrypt.h
index a105796..988b336 100644
--- a/arch/x86/include/asm/mem_encrypt.h
+++ b/arch/x86/include/asm/mem_encrypt.h
@@ -15,16 +15,24 @@
 
 #ifndef __ASSEMBLY__
 
+#include <linux/init.h>
+
 #ifdef CONFIG_AMD_MEM_ENCRYPT
 
 extern unsigned long sme_me_mask;
 
+void __init sme_enable(void);
+
 #else  /* !CONFIG_AMD_MEM_ENCRYPT */
 
 #define sme_me_mask	0UL
 
+static inline void __init sme_enable(void) { }
+
 #endif /* CONFIG_AMD_MEM_ENCRYPT */
 
+unsigned long sme_get_me_mask(void);
+
 #endif /* __ASSEMBLY__ */
 
 #endif /* __X86_MEM_ENCRYPT_H__ */
diff --git a/arch/x86/kernel/head64.c b/arch/x86/kernel/head64.c
index 2b2ac38..95979c3 100644
--- a/arch/x86/kernel/head64.c
+++ b/arch/x86/kernel/head64.c
@@ -14,6 +14,7 @@
 #include 
 #include 
 #include 
+#include 
 
 #include 
 #include 
@@ -46,6 +47,7 @@ static void __init *fixup_pointer(void *ptr, unsigned long physaddr)
 void __init __startup_64(unsigned long physaddr)
 {
unsigned long load_delta, *p;
+   unsigned long pgtable_flags;
pgdval_t *pgd;
p4dval_t *p4d;
pudval_t *pud;
@@ -66,6 +68,12 @@ void __init __startup_64(unsigned long physaddr)
if (load_delta & ~PMD_PAGE_MASK)
for (;;);
 
+   /* Activate Secure Memory Encryption (SME) if supported and enabled */
+   sme_enable();
+
+   /* Include the SME encryption mask in the fixup value */
+   load_delta += sme_get_me_mask();
+
/* Fixup the physical addresses in the page table */
 
	pgd = fixup_pointer(&early_top_pgt, physaddr);
@@ -92,28 +100,30 @@ void __init __startup_64(unsigned long physaddr)
 
pud = fixup_pointer(early_dynamic_pgts[next_early_pgt++], physaddr);
pmd = fixup_pointer(early_dynamic_pgts[next_early_pgt++], physaddr);
+   pgtable_flags = _KERNPG_TABLE + sme_get_me_mask();
 
if (IS_ENABLED(CONFIG_X86_5LEVEL)) {
		p4d = fixup_pointer(early_dynamic_pgts[next_early_pgt++], physaddr);
 
i = (physaddr >> PGDIR_SHIFT) % PTRS_PER_PGD;
-   pgd[i + 0] = (pgdval_t)p4d + _KERNPG_TABLE;
-   pgd[i + 1] = (pgdval_t)p4d + _KERNPG_TABLE;
+   pgd[i + 0] = (pgdval_t)p4d + pgtable_flags;
+   pgd[i + 1] = (pgdval_t)p4d + pgtable_flags;
 
i = (physaddr >> P4D_SHIFT) % PTRS_PER_P4D;
-   p4d[i + 0] = (pgdval_t)pud + _KERNPG_TABLE;
-   p4d[i + 1] = (pgdval_t)pud + _KERNPG_TABLE;
+   p4d[i + 0] = (pgdval_t)pud + pgtable_flags;
+   p4d[i + 1] = (pgdval_t)pud + pgtable_flags;
} else {
i = (physaddr >> PGDIR_SHIFT) % PTRS_PER_PGD;
-   pgd[i + 0] = (pgdval_t)pud + _KERNPG_TABLE;
-   pgd[i + 1] = (pgdval_t)pud + _KERNPG_TABLE;
+   pgd[i + 0] = (pgdval_t)pud + pgtable_flags;
+   pgd[i + 1] = (pgdval_t)pud + pgtable_flags;
}
 
i = (physaddr >> PUD_SHIFT) % PTRS_PER_PUD;
-   pud[i + 0] = (pudval_t)pmd + _KERNPG_TABLE;
-   pud[i + 1] = (pudval_t)pmd + _KERNPG_TABLE;
+   pud[i + 0] = (pudval_t)pmd + pgtable_flags;
+   pud[i + 1] = (pudval_t)pmd + pgtable_flags;
 
pmd_entry = __PAGE_KERNEL_LARGE_EXEC & ~_PAGE_GLOBAL;
+   pmd_entry += sme_get_me_mask();
pmd_entry +=  physaddr;
 
for (i = 0; i < DIV_ROUND_UP(_end - _text, PMD_SIZE); i++) {
@@ -134,9 +144,12 @@ void __init __startup_64(unsigned long physaddr)
pmd[i] += load_delta;
}
 
-   /* Fixup phys_base */
+   /*
+* Fixup phys_base - remove the memory encryption mask to obtain
+* the true physical address.
+*/
	p = fixup_pointer(&phys_base, physaddr);
-   *p += load_delta;
+   *p += load_delta - sme_get_me_mask();
 }
 
 /* Wipe all early page tables except for the kernel symbol map */
diff