Re: [PATCH v5 09/32] x86/mm: Provide general kernel support for memory encryption
On Thu, May 04, 2017 at 09:34:09AM -0500, Tom Lendacky wrote:
> I masked it out here based on a previous comment from Dave Hansen:
>
>   http://marc.info/?l=linux-kernel&m=148778719826905&w=2
>
> I could move the __sme_clr into the individual defines of:

Nah, it is fine as it is. I was just wondering...

Thanks.

-- 
Regards/Gruss,
    Boris.

Good mailing practices for 400: avoid top-posting and trim the reply.
--
To unsubscribe from this list: send the line "unsubscribe linux-efi" in
the body of a message to majord...@vger.kernel.org
More majordomo info at  http://vger.kernel.org/majordomo-info.html
Re: [PATCH v5 09/32] x86/mm: Provide general kernel support for memory encryption
On 4/27/2017 11:12 AM, Borislav Petkov wrote:
> On Tue, Apr 18, 2017 at 04:17:54PM -0500, Tom Lendacky wrote:
>> Changes to the existing page table macros will allow the SME support to
>> be enabled in a simple fashion with minimal changes to files that use
>> these macros. Since the memory encryption mask will now be part of the
>> regular pagetable macros, we introduce two new macros (_PAGE_TABLE_NOENC
>> and _KERNPG_TABLE_NOENC) to allow for early pagetable
>> creation/initialization without the encryption mask before SME becomes
>> active. Two new pgprot() macros are defined to allow setting or
>> clearing the page encryption mask.
>
> ...
>
>> @@ -55,7 +57,7 @@ static inline void copy_user_page(void *to, void *from,
>> unsigned long vaddr,
>>  	__phys_addr_symbol(__phys_reloc_hide((unsigned long)(x)))
>>
>>  #ifndef __va
>> -#define __va(x)			((void *)((unsigned long)(x)+PAGE_OFFSET))
>> +#define __va(x)			((void *)(__sme_clr(x) + PAGE_OFFSET))
>>  #endif
>>
>>  #define __boot_va(x)		__va(x)
>> diff --git a/arch/x86/include/asm/page_types.h b/arch/x86/include/asm/page_types.h
>> index 7bd0099..fead0a5 100644
>> --- a/arch/x86/include/asm/page_types.h
>> +++ b/arch/x86/include/asm/page_types.h
>> @@ -15,7 +15,7 @@
>>  #define PUD_PAGE_SIZE		(_AC(1, UL) << PUD_SHIFT)
>>  #define PUD_PAGE_MASK		(~(PUD_PAGE_SIZE-1))
>>
>> -#define __PHYSICAL_MASK		((phys_addr_t)((1ULL << __PHYSICAL_MASK_SHIFT) - 1))
>> +#define __PHYSICAL_MASK		((phys_addr_t)(__sme_clr((1ULL << __PHYSICAL_MASK_SHIFT) - 1)))
>
> That looks strange: poking SME mask hole into a mask...?

I masked it out here based on a previous comment from Dave Hansen:

  http://marc.info/?l=linux-kernel&m=148778719826905&w=2

I could move the __sme_clr into the individual defines of:

  PHYSICAL_PAGE_MASK, PHYSICAL_PMD_PAGE_MASK and PHYSICAL_PUD_PAGE_MASK

Either way works for me.

Thanks,
Tom

>>  #define __VIRTUAL_MASK		((1UL << __VIRTUAL_MASK_SHIFT) - 1)
>>
>>  /* Cast *PAGE_MASK to a signed type so that it is sign-extended if
Re: [PATCH v5 09/32] x86/mm: Provide general kernel support for memory encryption
On Tue, Apr 18, 2017 at 04:17:54PM -0500, Tom Lendacky wrote:
> Changes to the existing page table macros will allow the SME support to
> be enabled in a simple fashion with minimal changes to files that use
> these macros. Since the memory encryption mask will now be part of the
> regular pagetable macros, we introduce two new macros (_PAGE_TABLE_NOENC
> and _KERNPG_TABLE_NOENC) to allow for early pagetable
> creation/initialization without the encryption mask before SME becomes
> active. Two new pgprot() macros are defined to allow setting or
> clearing the page encryption mask.

...

> @@ -55,7 +57,7 @@ static inline void copy_user_page(void *to, void *from,
> unsigned long vaddr,
>  	__phys_addr_symbol(__phys_reloc_hide((unsigned long)(x)))
>
>  #ifndef __va
> -#define __va(x)			((void *)((unsigned long)(x)+PAGE_OFFSET))
> +#define __va(x)			((void *)(__sme_clr(x) + PAGE_OFFSET))
>  #endif
>
>  #define __boot_va(x)		__va(x)
> diff --git a/arch/x86/include/asm/page_types.h b/arch/x86/include/asm/page_types.h
> index 7bd0099..fead0a5 100644
> --- a/arch/x86/include/asm/page_types.h
> +++ b/arch/x86/include/asm/page_types.h
> @@ -15,7 +15,7 @@
>  #define PUD_PAGE_SIZE		(_AC(1, UL) << PUD_SHIFT)
>  #define PUD_PAGE_MASK		(~(PUD_PAGE_SIZE-1))
>
> -#define __PHYSICAL_MASK		((phys_addr_t)((1ULL << __PHYSICAL_MASK_SHIFT) - 1))
> +#define __PHYSICAL_MASK		((phys_addr_t)(__sme_clr((1ULL << __PHYSICAL_MASK_SHIFT) - 1)))

That looks strange: poking SME mask hole into a mask...?

>  #define __VIRTUAL_MASK		((1UL << __VIRTUAL_MASK_SHIFT) - 1)
>
>  /* Cast *PAGE_MASK to a signed type so that it is sign-extended if

-- 
Regards/Gruss,
    Boris.

Good mailing practices for 400: avoid top-posting and trim the reply.
Re: [PATCH v5 09/32] x86/mm: Provide general kernel support for memory encryption
On 4/24/2017 10:57 AM, Dave Hansen wrote:
> On 04/24/2017 08:53 AM, Tom Lendacky wrote:
>> On 4/21/2017 4:52 PM, Dave Hansen wrote:
>>> On 04/18/2017 02:17 PM, Tom Lendacky wrote:
>>>> @@ -55,7 +57,7 @@ static inline void copy_user_page(void *to, void
>>>> *from, unsigned long vaddr,
>>>>  	__phys_addr_symbol(__phys_reloc_hide((unsigned long)(x)))
>>>>
>>>>  #ifndef __va
>>>> -#define __va(x)		((void *)((unsigned long)(x)+PAGE_OFFSET))
>>>> +#define __va(x)		((void *)(__sme_clr(x) + PAGE_OFFSET))
>>>>  #endif
>>>
>>> It seems wrong to be modifying __va(). It currently takes a physical
>>> address, and this modifies it to take a physical address plus the SME
>>> bits.
>>
>> This actually modifies it to be sure the encryption bit is not part of
>> the physical address.
>
> If SME bits make it this far, we have a bug elsewhere. Right? Probably
> best not to paper over it.

That all depends on the approach. Currently that's not the case for the
one situation that you mentioned with cr3. But if we do take the
approach that we should never feed physical addresses to __va() with the
encryption bit set then, yes, it would imply a bug elsewhere - which is
probably a good approach. I'll work on that. I could even add a debug
config option that would issue a warning should __va() encounter the
encryption bit if SME is enabled or active.

Thanks,
Tom
Re: [PATCH v5 09/32] x86/mm: Provide general kernel support for memory encryption
On 04/24/2017 08:53 AM, Tom Lendacky wrote:
> On 4/21/2017 4:52 PM, Dave Hansen wrote:
>> On 04/18/2017 02:17 PM, Tom Lendacky wrote:
>>> @@ -55,7 +57,7 @@ static inline void copy_user_page(void *to, void
>>> *from, unsigned long vaddr,
>>>  	__phys_addr_symbol(__phys_reloc_hide((unsigned long)(x)))
>>>
>>>  #ifndef __va
>>> -#define __va(x)		((void *)((unsigned long)(x)+PAGE_OFFSET))
>>> +#define __va(x)		((void *)(__sme_clr(x) + PAGE_OFFSET))
>>>  #endif
>>
>> It seems wrong to be modifying __va(). It currently takes a physical
>> address, and this modifies it to take a physical address plus the SME
>> bits.
>
> This actually modifies it to be sure the encryption bit is not part of
> the physical address.

If SME bits make it this far, we have a bug elsewhere. Right? Probably
best not to paper over it.
Re: [PATCH v5 09/32] x86/mm: Provide general kernel support for memory encryption
On 4/21/2017 4:52 PM, Dave Hansen wrote:
> On 04/18/2017 02:17 PM, Tom Lendacky wrote:
>> @@ -55,7 +57,7 @@ static inline void copy_user_page(void *to, void
>> *from, unsigned long vaddr,
>>  	__phys_addr_symbol(__phys_reloc_hide((unsigned long)(x)))
>>
>>  #ifndef __va
>> -#define __va(x)		((void *)((unsigned long)(x)+PAGE_OFFSET))
>> +#define __va(x)		((void *)(__sme_clr(x) + PAGE_OFFSET))
>>  #endif
>
> It seems wrong to be modifying __va(). It currently takes a physical
> address, and this modifies it to take a physical address plus the SME
> bits.

This actually modifies it to be sure the encryption bit is not part of
the physical address.

> How does that end up ever happening? If we are pulling physical
> addresses out of the page tables, we use p??_phys(). I'd expect *those*
> to be masking off the SME bits.
>
> Is it these cases?
>
> 	pgd_t *base = __va(read_cr3());
>
> For those, it seems like we really want to create two modes of reading
> cr3. One that truly reads CR3 and another that reads the pgd's physical
> address out of CR3. Then you only do the SME masking on the one
> fetching a physical address, and the SME bits never leak into __va().

I'll investigate this and see if I can remove the mod to __va().

Thanks,
Tom
Re: [PATCH v5 09/32] x86/mm: Provide general kernel support for memory encryption
On 04/18/2017 02:17 PM, Tom Lendacky wrote:
> @@ -55,7 +57,7 @@ static inline void copy_user_page(void *to, void *from,
> unsigned long vaddr,
>  	__phys_addr_symbol(__phys_reloc_hide((unsigned long)(x)))
>
>  #ifndef __va
> -#define __va(x)		((void *)((unsigned long)(x)+PAGE_OFFSET))
> +#define __va(x)		((void *)(__sme_clr(x) + PAGE_OFFSET))
>  #endif

It seems wrong to be modifying __va(). It currently takes a physical
address, and this modifies it to take a physical address plus the SME
bits.

How does that end up ever happening? If we are pulling physical
addresses out of the page tables, we use p??_phys(). I'd expect *those*
to be masking off the SME bits.

Is it these cases?

	pgd_t *base = __va(read_cr3());

For those, it seems like we really want to create two modes of reading
cr3. One that truly reads CR3 and another that reads the pgd's physical
address out of CR3. Then you only do the SME masking on the one fetching
a physical address, and the SME bits never leak into __va().