Re: [PATCH 07/13] x86: Secure Launch kernel early boot stub

2020-10-21 Thread Ross Philipson
On 10/21/20 12:18 PM, Arvind Sankar wrote:
> On Wed, Oct 21, 2020 at 05:28:33PM +0200, Daniel Kiper wrote:
>> On Mon, Oct 19, 2020 at 01:18:22PM -0400, Arvind Sankar wrote:
>>> On Mon, Oct 19, 2020 at 04:51:53PM +0200, Daniel Kiper wrote:
 On Fri, Oct 16, 2020 at 04:51:51PM -0400, Arvind Sankar wrote:
> On Thu, Oct 15, 2020 at 08:26:54PM +0200, Daniel Kiper wrote:
>>
>> I am discussing the other option with Ross. We can create a
>> .rodata.mle_header section and put it at a fixed offset, as kernel_info
>> is. So, we would have, e.g.:
>>
>> arch/x86/boot/compressed/vmlinux.lds.S:
>> .rodata.kernel_info KERNEL_INFO_OFFSET : {
>> *(.rodata.kernel_info)
>> }
>> ASSERT(ABSOLUTE(kernel_info) == KERNEL_INFO_OFFSET, "kernel_info at bad address!")
>>
>> .rodata.mle_header MLE_HEADER_OFFSET : {
>> *(.rodata.mle_header)
>> }
>> ASSERT(ABSOLUTE(mle_header) == MLE_HEADER_OFFSET, "mle_header at bad address!")
>>
>> arch/x86/boot/compressed/sl_stub.S:
>> #define mleh_rva(X) (((X) - mle_header) + MLE_HEADER_OFFSET)
>>
>> .section ".rodata.mle_header", "a"
>>
>> SYM_DATA_START(mle_header)
>> .long   0x9082ac5a        /* UUID0 */
>> .long   0x74a7476f        /* UUID1 */
>> .long   0xa2555c0f        /* UUID2 */
>> .long   0x42b651cb        /* UUID3 */
>> .long   0x00000034        /* MLE header size */
>> .long   0x00020002        /* MLE version 2.2 */
>> .long   mleh_rva(sl_stub_entry)    /* Linear entry point of MLE (virt. address) */
>> .long   0x00000000        /* First valid page of MLE */
>> .long   0x00000000        /* Offset within binary of first byte of MLE */
>> .long   0x00000000        /* Offset within binary of last byte + 1 of MLE */
>> .long   0x00000223        /* Bit vector of MLE-supported capabilities */
>> .long   0x00000000        /* Starting linear address of command line (unused) */
>> .long   0x00000000        /* Ending linear address of command line (unused) */
>> SYM_DATA_END(mle_header)
>>
>> Of course MLE_HEADER_OFFSET has to be defined as a constant somewhere.
>> Anyway, is it acceptable?

 What do you think about my MLE_HEADER_OFFSET and related stuff proposal?

>>>
>>> I'm wondering if it would be easier to just allow relocations in these
>>> special "header" sections. I need to check how easy/hard it is to do
>>> that without triggering linker warnings.
>>
>> Ross and I are still bouncing some ideas. We came to the conclusion that
>> putting mle_header into the kernel's .rodata.kernel_info section, or even
>> into the arch/x86/boot/compressed/kernel_info.S file, would be the easiest
>> thing to do at this point. Of course I would suggest some renaming too,
>> e.g. .rodata.kernel_info to .rodata.kernel_headers, etc. Does that make
>> sense to you?
>>
>> Daniel
> 
> I haven't been able to come up with any different options that don't
> require post-processing of the kernel image. Allowing relocations in
> specific sections seems to not be possible with lld, and anyway would
> require the fields to be 64-bit sized so it doesn't really help.
> 
> Putting mle_header into kernel_info seems like a reasonable thing to me,
> and if you do that, putting it into kernel_info.S would seem to be
> necessary?  Would you also have a fixed field with the offset of the

That seems like a reasonable place for it to go.

> mle_header from kernel_info?  That seems nicer than having the
> bootloader scan the variable data for magic strings.

Yes, kernel_info will have a field with the offset of the mle_header. I
agree that would be nicer.
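
For illustration, a bootloader-side lookup of the MLE header via that field
could look like the sketch below. The field offsets follow the kernel_info
layout documented in the boot.rst hunk of patch 07 (setup_type_max at 0x000c,
mle_header_offset at 0x0010); the struct and function names are illustrative,
not a final ABI.

#include <stdint.h>
#include <string.h>

/* Assumed flat view of kernel_info; all fields naturally aligned. */
struct kernel_info {
	uint8_t  header[4];	    /* "LToP" magic */
	uint32_t size;
	uint32_t size_total;
	uint32_t setup_type_max;    /* offset 0x000c */
	uint32_t mle_header_offset; /* offset 0x0010, 0 if not SL enabled */
};

/* pm_image points at the start of the protected-mode image (startup_32). */
static const void *find_mle_header(const uint8_t *pm_image,
				   uint32_t kernel_info_offset)
{
	struct kernel_info ki;

	memcpy(&ki, pm_image + kernel_info_offset, sizeof(ki));
	if (memcmp(ki.header, "LToP", 4) != 0)
		return NULL;	/* no kernel_info at this offset */
	if (ki.mle_header_offset == 0)
		return NULL;	/* kernel is not Secure Launch enabled */
	return pm_image + ki.mle_header_offset;
}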

Thanks
Ross

> 



Re: [PATCH 07/13] x86: Secure Launch kernel early boot stub

2020-10-19 Thread Ross Philipson
On 10/19/20 1:06 PM, Arvind Sankar wrote:
> On Mon, Oct 19, 2020 at 10:38:08AM -0400, Ross Philipson wrote:
>> On 10/16/20 4:51 PM, Arvind Sankar wrote:
>>> On Thu, Oct 15, 2020 at 08:26:54PM +0200, Daniel Kiper wrote:
>>>>
>>>> I am discussing the other option with Ross. We can create a
>>>> .rodata.mle_header section and put it at a fixed offset, as kernel_info
>>>> is. So, we would have, e.g.:
>>>>
>>>> arch/x86/boot/compressed/vmlinux.lds.S:
>>>> .rodata.kernel_info KERNEL_INFO_OFFSET : {
>>>> *(.rodata.kernel_info)
>>>> }
>>>> ASSERT(ABSOLUTE(kernel_info) == KERNEL_INFO_OFFSET, "kernel_info at bad address!")
>>>>
>>>> .rodata.mle_header MLE_HEADER_OFFSET : {
>>>> *(.rodata.mle_header)
>>>> }
>>>> ASSERT(ABSOLUTE(mle_header) == MLE_HEADER_OFFSET, "mle_header at bad address!")
>>>>
>>>> arch/x86/boot/compressed/sl_stub.S:
>>>> #define mleh_rva(X) (((X) - mle_header) + MLE_HEADER_OFFSET)
>>>>
>>>> .section ".rodata.mle_header", "a"
>>>>
>>>> SYM_DATA_START(mle_header)
>>>> .long   0x9082ac5a        /* UUID0 */
>>>> .long   0x74a7476f        /* UUID1 */
>>>> .long   0xa2555c0f        /* UUID2 */
>>>> .long   0x42b651cb        /* UUID3 */
>>>> .long   0x00000034        /* MLE header size */
>>>> .long   0x00020002        /* MLE version 2.2 */
>>>> .long   mleh_rva(sl_stub_entry)    /* Linear entry point of MLE (virt. address) */
>>>> .long   0x00000000        /* First valid page of MLE */
>>>> .long   0x00000000        /* Offset within binary of first byte of MLE */
>>>> .long   0x00000000        /* Offset within binary of last byte + 1 of MLE */
>>>> .long   0x00000223        /* Bit vector of MLE-supported capabilities */
>>>> .long   0x00000000        /* Starting linear address of command line (unused) */
>>>> .long   0x00000000        /* Ending linear address of command line (unused) */
>>>> SYM_DATA_END(mle_header)
>>>>
>>>> Of course MLE_HEADER_OFFSET has to be defined as a constant somewhere.
>>>> Anyway, is it acceptable?
>>>>
>>>> There is also another problem. We have to put the size of the Linux
>>>> kernel image into mle_header. Currently this is done by the bootloader,
>>>> but I think it is not the role of the bootloader. The kernel image
>>>> should provide all the data describing its properties and not rely on
>>>> the bootloader to do that. Ross and I investigated various options but
>>>> we did not find a good/simple way to do it. Could you suggest how we
>>>> should do that, or at least where we should take a look to get some
>>>> ideas?
>>>>
>>>> Daniel
>>>
>>> What exactly is the size you need here? Is it just the size of the
>>> protected mode image, i.e. startup_32 to _edata? Or is it the size of
>>
>> It is the size of the protected mode image. Though how to reference
>> those symbols to get the size might add more relocation issues.
>>
> 
> Ok, then I think mleh_rva(_edata) should get you that -- I assume you
> don't want to include the uninitialized data in the size? The kernel
> will access memory beyond _edata (up to the init_size in the setup
> header), but that's not part of the image itself.

Yea we basically want the size of the image. There is nothing to measure
beyond the image as loaded into memory by the bootloader. rva(_edata)
seems to be the ticket.
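
As a sketch of what the pre-launch side then does with it (names here are
assumed; field semantics per the MLE header above): with "first byte of MLE"
zero and "last byte + 1" filled from rva(_edata), the measured region and the
number of pages the MLE page tables must cover fall out directly.

#include <stdint.h>

/* Bytes the DRTM environment hashes; .bss past _edata is excluded. */
static uint64_t mle_measured_len(uint32_t first_byte_off,
				 uint32_t last_byte_plus_1)
{
	return (uint64_t)last_byte_plus_1 - first_byte_off;
}

/* 4 KiB pages the MLE page tables need to map that region. */
static uint64_t mle_page_count(uint32_t first_byte_off,
			       uint32_t last_byte_plus_1)
{
	uint64_t len = mle_measured_len(first_byte_off, last_byte_plus_1);

	return (len + 4095) / 4096;
}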

Thanks
Ross

> 



Re: [PATCH 07/13] x86: Secure Launch kernel early boot stub

2020-10-19 Thread Ross Philipson
On 10/16/20 4:51 PM, Arvind Sankar wrote:
> On Thu, Oct 15, 2020 at 08:26:54PM +0200, Daniel Kiper wrote:
>>
>> I am discussing the other option with Ross. We can create a
>> .rodata.mle_header section and put it at a fixed offset, as kernel_info
>> is. So, we would have, e.g.:
>>
>> arch/x86/boot/compressed/vmlinux.lds.S:
>> .rodata.kernel_info KERNEL_INFO_OFFSET : {
>> *(.rodata.kernel_info)
>> }
>> ASSERT(ABSOLUTE(kernel_info) == KERNEL_INFO_OFFSET, "kernel_info at bad address!")
>>
>> .rodata.mle_header MLE_HEADER_OFFSET : {
>> *(.rodata.mle_header)
>> }
>> ASSERT(ABSOLUTE(mle_header) == MLE_HEADER_OFFSET, "mle_header at bad address!")
>>
>> arch/x86/boot/compressed/sl_stub.S:
>> #define mleh_rva(X) (((X) - mle_header) + MLE_HEADER_OFFSET)
>>
>> .section ".rodata.mle_header", "a"
>>
>> SYM_DATA_START(mle_header)
>> .long   0x9082ac5a        /* UUID0 */
>> .long   0x74a7476f        /* UUID1 */
>> .long   0xa2555c0f        /* UUID2 */
>> .long   0x42b651cb        /* UUID3 */
>> .long   0x00000034        /* MLE header size */
>> .long   0x00020002        /* MLE version 2.2 */
>> .long   mleh_rva(sl_stub_entry)    /* Linear entry point of MLE (virt. address) */
>> .long   0x00000000        /* First valid page of MLE */
>> .long   0x00000000        /* Offset within binary of first byte of MLE */
>> .long   0x00000000        /* Offset within binary of last byte + 1 of MLE */
>> .long   0x00000223        /* Bit vector of MLE-supported capabilities */
>> .long   0x00000000        /* Starting linear address of command line (unused) */
>> .long   0x00000000        /* Ending linear address of command line (unused) */
>> SYM_DATA_END(mle_header)
>>
>> Of course MLE_HEADER_OFFSET has to be defined as a constant somewhere.
>> Anyway, is it acceptable?
>>
>> There is also another problem. We have to put the size of the Linux
>> kernel image into mle_header. Currently this is done by the bootloader,
>> but I think it is not the role of the bootloader. The kernel image
>> should provide all the data describing its properties and not rely on
>> the bootloader to do that. Ross and I investigated various options but
>> we did not find a good/simple way to do it. Could you suggest how we
>> should do that, or at least where we should take a look to get some
>> ideas?
>>
>> Daniel
> 
> What exactly is the size you need here? Is it just the size of the
> protected mode image, i.e. startup_32 to _edata? Or is it the size of

It is the size of the protected mode image. Though how to reference
those symbols to get the size might add more relocation issues.

> the whole bzImage file, or something else? I guess the same question
> applies to "first valid page of MLE" and "first byte of MLE", and the

Because of the way the launch environment is set up, both "first valid
page of MLE" and "first byte of MLE" are always zero, so there is nothing
to sort out there. The only fields that need to be updated are "Linear
entry point of MLE" and "Offset within binary of last byte + 1 of MLE".

Thanks
Ross

> linear entry point -- are those all relative to startup_32 or do they
> need to be relative to the start of the bzImage, i.e. you have to add
> the size of the real-mode boot stub?
> 
> If you need to include the size of the bzImage file, that's not known
> when the files in boot/compressed are built. It's only known after the
> real-mode stub is linked. arch/x86/boot/tools/build.c fills in various
> details in the setup header and creates the bzImage file, but it does
> not currently modify anything in the protected-mode portion of the
> compressed kernel (i.e. arch/x86/boot/compressed/vmlinux, which then
> gets converted to binary format as arch/x86/boot/vmlinux.bin), so it
> would need to be extended if you need to modify the MLE header to
> include the bzImage size or anything depending on the size of the
> real-mode stub.
> 
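
As a concrete illustration of that post-processing idea, a build.c-style tool
could patch a 32-bit field in the already-linked protected-mode binary once
the final layout is known. This is only a sketch; the function name and the
choice of field to patch are assumptions, not what build.c does today.

#include <stdint.h>
#include <stdio.h>

/*
 * Patch a little-endian 32-bit value at a known file offset, e.g. an
 * MLE header field whose value (a size depending on the real-mode
 * stub) is only known after the pieces are linked together.
 */
static int patch_le32(FILE *image, long field_off, uint32_t value)
{
	uint8_t buf[4] = {
		value & 0xff, (value >> 8) & 0xff,
		(value >> 16) & 0xff, (value >> 24) & 0xff,
	};

	if (fseek(image, field_off, SEEK_SET) != 0)
		return -1;
	return fwrite(buf, 1, sizeof(buf), image) == sizeof(buf) ? 0 : -1;
}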



Re: [PATCH 07/13] x86: Secure Launch kernel early boot stub

2020-09-29 Thread Ross Philipson
On 9/25/20 3:18 PM, Arvind Sankar wrote:
> On Fri, Sep 25, 2020 at 10:56:43AM -0400, Ross Philipson wrote:
>> On 9/24/20 1:38 PM, Arvind Sankar wrote:
>>> On Thu, Sep 24, 2020 at 10:58:35AM -0400, Ross Philipson wrote:
>>>
>>>> diff --git a/arch/x86/boot/compressed/head_64.S 
>>>> b/arch/x86/boot/compressed/head_64.S
>>>> index 97d37f0..42043bf 100644
>>>> --- a/arch/x86/boot/compressed/head_64.S
>>>> +++ b/arch/x86/boot/compressed/head_64.S
>>>> @@ -279,6 +279,21 @@ SYM_INNER_LABEL(efi32_pe_stub_entry, SYM_L_LOCAL)
>>>>  SYM_FUNC_END(efi32_stub_entry)
>>>>  #endif
>>>>  
>>>> +#ifdef CONFIG_SECURE_LAUNCH
>>>> +SYM_FUNC_START(sl_stub_entry)
>>>> +  /*
>>>> +   * On entry, %ebx has the entry abs offset to sl_stub_entry. To
>>>> +   * find the beginning of where we are loaded, sub off from the
>>>> +   * beginning.
>>>> +   */
>>>
>>> This requirement should be added to the documentation. Is it necessary
>>> or can this stub just figure out the address the same way as the other
>>> 32-bit entry points, using the scratch space in bootparams as a little
>>> stack?
>>
>> It is based on the state of the BSP when TXT vectors to the measured
>> launch environment. It is documented in the TXT spec and the SDMs.
>>
> 
> I think it would be useful to add to the x86 boot documentation how
> exactly this new entry point is called, even if it's just adding a link
> to some section of those specs. The doc should also say that an
> mle_header_offset of 0 means the kernel isn't secure launch enabled.

Ok will do.

> 
>>>
>>> For the 32-bit assembler code that's being added, tip/master now has
>>> changes that prevent the compressed kernel from having any runtime
>>> relocations. You'll need to revise some of the code and the data
>>> structures' initial values to avoid creating relocations.
>>
>> Could you elaborate on this some more? I am not sure I see places in the
>> secure launch asm that would be creating relocations like this.
>>
>> Thank you,
>> Ross
>>
> 
> You should see them if you do
>   readelf -r arch/x86/boot/compressed/vmlinux
> 
> In terms of the code, things like:
> 
>   addl    %ebx, (sl_gdt_desc + 2)(%ebx)
> 
> will create a relocation, because the linker interprets this as wanting
> the runtime address of sl_gdt_desc, rather than just the offset from
> startup_32.
> 
> https://git.kernel.org/pub/scm/linux/kernel/git/tip/tip.git/tree/arch/x86/boot/compressed/head_64.S#n48
> 
> has a comment with some explanation and a macro that the 32-bit code in
> startup_32 uses to avoid creating relocations.
> 
> Since the SL code is in a different assembler file (and a different
> section), you can't directly use the same macro. I would suggest getting
> rid of sl_stub_entry and entering directly at sl_stub, and then the code
> in sl_stub.S can use sl_stub for the base address, defining the rva()
> macro there as
> 
>   #define rva(X) ((X) - sl_stub)
> 
> You will also need to avoid initializing data with symbol addresses.
> 
>   .long mle_header
>   .long sl_stub_entry
>   .long sl_gdt
> 
> will create relocations. The third one is easy, just replace it with
> sl_gdt - sl_gdt_desc and initialize it at runtime with
> 
>   leal    rva(sl_gdt_desc)(%ebx), %eax
>   addl    %eax, 2(%eax)
>   lgdt    (%eax)
> 
> The other two are more messy, unfortunately there is no easy way to tell
> the linker what we want here. The other entry point addresses (for the
> EFI stub) are populated in a post-processing step after the compressed
> kernel has been linked, we could teach it to also update kernel_info.
> 
> Without that, for kernel_info, you could change it to store the offset
> of the MLE header from kernel_info, instead of from the start of the
> image.
> 
> For the MLE header, it could be moved to .head.text in head_64.S, and
> initialized with
>   .long rva(sl_stub)
> This will also let it be placed at a fixed offset from startup_32, so
> that kernel_info can just be populated with a constant.

Thank you for the detailed reply. I am going to start digging into this now.

Ross

> 



Re: [PATCH 01/13] x86: Secure Launch Kconfig

2020-09-25 Thread Ross Philipson
On 9/24/20 10:08 PM, Randy Dunlap wrote:
> On 9/24/20 7:58 AM, Ross Philipson wrote:
>> Initial bits to bring in Secure Launch functionality. Add Kconfig
>> options for compiling in/out the Secure Launch code.
>>
>> Signed-off-by: Ross Philipson 
> 
> Hi,
> from Documentation/process/coding-style.rst:
> 
> Lines under a ``config`` definition
> are indented with one tab, while help text is indented an additional two
> spaces.

Ok sorry about that. I probably just copied what the previous entry was
doing. Will fix.

Thanks
Ross

> 
>> ---
>>  arch/x86/Kconfig | 36 
>>  1 file changed, 36 insertions(+)
>>
>> diff --git a/arch/x86/Kconfig b/arch/x86/Kconfig
>> index 7101ac6..8957981 100644
>> --- a/arch/x86/Kconfig
>> +++ b/arch/x86/Kconfig
>> @@ -1968,6 +1968,42 @@ config EFI_MIXED
>>  
>> If unsure, say N.
>>  
>> +config SECURE_LAUNCH
>> +bool "Secure Launch support"
>> +default n
>> +depends on X86_64
>> +help
>> +   The Secure Launch feature allows a kernel to be loaded
>> +   directly through an Intel TXT measured launch. Intel TXT
>> +   establishes a Dynamic Root of Trust for Measurement (DRTM)
>> +   where the CPU measures the kernel image. This feature then
>> +   continues the measurement chain over kernel configuration
>> +   information and init images.
>> +
>> +choice
>> +prompt "Select Secure Launch Algorithm for TPM2"
>> +depends on SECURE_LAUNCH
>> +
>> +config SECURE_LAUNCH_SHA1
>> +bool "Secure Launch TPM1 SHA1"
>> +help
>> +   When using Secure Launch and TPM1 is present, use SHA1 hash
>> +   algorithm for measurements.
>> +
>> +config SECURE_LAUNCH_SHA256
>> +bool "Secure Launch TPM2 SHA256"
>> +help
>> +   When using Secure Launch and TPM2 is present, use SHA256 hash
>> +   algorithm for measurements.
>> +
>> +config SECURE_LAUNCH_SHA512
>> +bool "Secure Launch TPM2 SHA512"
>> +help
>> +   When using Secure Launch and TPM2 is present, use SHA512 hash
>> +   algorithm for measurements.
>> +
>> +endchoice
>> +
> 
> 
> thanks.
> 



Re: [PATCH 07/13] x86: Secure Launch kernel early boot stub

2020-09-25 Thread Ross Philipson
On 9/24/20 1:38 PM, Arvind Sankar wrote:
> On Thu, Sep 24, 2020 at 10:58:35AM -0400, Ross Philipson wrote:
>> The Secure Launch (SL) stub provides the entry point for Intel TXT (and
>> later AMD SKINIT) to vector to during the late launch. The symbol
>> sl_stub_entry is that entry point and its offset into the kernel is
>> conveyed to the launching code using the MLE (Measured Launch
>> Environment) header in the structure named mle_header. The offset of the
>> MLE header is set in the kernel_info. The routine sl_stub contains the
>> very early late launch setup code responsible for setting up the basic
>> environment to allow the normal kernel startup_32 code to proceed. It is
>> also responsible for properly waking and handling the APs on Intel
platforms. The routine sl_main, which runs after entering 64b mode, is
responsible for measuring configuration and module information before
it is used, e.g. the boot params, the kernel command line, the TXT heap,
an external initramfs, etc.
>>
>> Signed-off-by: Ross Philipson 
> 
> Which version of the kernel is this based on?

git://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git

master branch

> 
>> diff --git a/arch/x86/boot/compressed/head_64.S 
>> b/arch/x86/boot/compressed/head_64.S
>> index 97d37f0..42043bf 100644
>> --- a/arch/x86/boot/compressed/head_64.S
>> +++ b/arch/x86/boot/compressed/head_64.S
>> @@ -279,6 +279,21 @@ SYM_INNER_LABEL(efi32_pe_stub_entry, SYM_L_LOCAL)
>>  SYM_FUNC_END(efi32_stub_entry)
>>  #endif
>>  
>> +#ifdef CONFIG_SECURE_LAUNCH
>> +SYM_FUNC_START(sl_stub_entry)
>> +/*
>> + * On entry, %ebx has the entry abs offset to sl_stub_entry. To
>> + * find the beginning of where we are loaded, sub off from the
>> + * beginning.
>> + */
> 
> This requirement should be added to the documentation. Is it necessary
> or can this stub just figure out the address the same way as the other
> 32-bit entry points, using the scratch space in bootparams as a little
> stack?

It is based on the state of the BSP when TXT vectors to the measured
launch environment. It is documented in the TXT spec and the SDMs.

> 
>> +leal    (startup_32 - sl_stub_entry)(%ebx), %ebx
>> +
>> +/* More room to work in sl_stub in the text section */
>> +jmp sl_stub
>> +
>> +SYM_FUNC_END(sl_stub_entry)
>> +#endif
>> +
>>  .code64
>>  .org 0x200
>>  SYM_CODE_START(startup_64)
>> @@ -537,6 +552,25 @@ SYM_FUNC_START_LOCAL_NOALIGN(.Lrelocated)
>>  shrq    $3, %rcx
>>  rep stosq
>>  
>> +#ifdef CONFIG_SECURE_LAUNCH
>> +/*
>> + * Have to do the final early sl stub work in 64b area.
>> + *
>> + * *** NOTE ***
>> + *
>> + * Several boot params get used before we get a chance to measure
>> + * them in this call. This is a known issue and we currently don't
>> + * have a solution. The scratch field doesn't matter and loadflags
>> + * have KEEP_SEGMENTS set by the stub code. There is no obvious way
>> + * to do anything about the use of kernel_alignment or init_size
>> + * though these seem low risk.
>> + */
> 
> There are various fields in bootparams that depend on where the
> kernel/initrd and cmdline are loaded in memory. If the entire bootparams
> page is getting measured, does that mean they all have to be at fixed
> addresses on every boot?

Yes, that is a very good point. In other places when measuring, we make
sure to skip things like addresses and sizes of things outside of the
structure being measured. This needs to be done with boot params too.
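
To picture it, one approach is to hash a scrubbed copy of boot_params rather
than the live page. This is only a sketch: the scrubbed fields below (scratch
and the initrd address/size, at their boot.rst offsets) are examples, not the
patch's final list.

#include <stddef.h>
#include <stdint.h>
#include <string.h>

#define BOOT_PARAMS_SIZE 4096

/* Measure boot_params with load-address dependent fields zeroed. */
static void measure_boot_params(const uint8_t *boot_params,
				void (*hash)(const uint8_t *buf, size_t len))
{
	uint8_t copy[BOOT_PARAMS_SIZE];

	memcpy(copy, boot_params, sizeof(copy));
	memset(copy + 0x1e4, 0, 4);	/* scratch */
	memset(copy + 0x218, 0, 4);	/* hdr.ramdisk_image */
	memset(copy + 0x21c, 0, 4);	/* hdr.ramdisk_size */
	hash(copy, sizeof(copy));
}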

> 
> Also KEEP_SEGMENTS support is gone from the kernel since v5.7, since it
> was unused. startup_32 now always loads a GDT and then the segment
> registers. I think this should be ok for you as the only thing the flag
> used to do in the 64-bit kernel was to stop startup_32 from blindly
> loading __BOOT_DS into the segment registers before it had setup its own
> GDT.

Yea this was there to prevent that blind loading of __BOOT_DS. I see it
is gone so I will remove the comment and the place where the flag is set.

> 
> For the 32-bit assembler code that's being added, tip/master now has
> changes that prevent the compressed kernel from having any runtime
> relocations. You'll need to revise some of the code and the data
> structures' initial values to avoid creating relocations.

Could you elaborate on this some more? I am not sure I see places in the
secure launch asm that would be creating relocations like this.

Thank you,
Ross

> 
> Thanks.
> 



[PATCH 08/13] x86: Secure Launch kernel late boot stub

2020-09-24 Thread Ross Philipson
The routine slaunch_setup is called out of the x86 specific setup_arch
routine during early kernel boot. After determining what platform is
present, various operations specific to that platform occur. This
includes finalizing settings for the platform's late launch and verifying
that memory protections are in place.

For TXT, this code also reserves the original compressed kernel setup
area where the APs were left looping so that this memory cannot be used.
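
As a sketch of that reservation (the accessor and the ap_wake_block field
appear later in this patch; the one-page size here is an assumption):

#include <linux/memblock.h>
#include <linux/slaunch.h>

/* Keep the page where the APs are parked out of the allocators. */
static void __init slaunch_reserve_ap_wake_block_sketch(void)
{
	struct sl_ap_wake_info *wake = slaunch_get_ap_wake_info();

	if (wake->ap_wake_block)
		memblock_reserve(wake->ap_wake_block, PAGE_SIZE);
}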

Signed-off-by: Ross Philipson 
---
 arch/x86/kernel/Makefile   |   1 +
 arch/x86/kernel/setup.c|   3 +
 arch/x86/kernel/slaunch.c  | 495 +
 drivers/iommu/intel/dmar.c |   4 +
 4 files changed, 503 insertions(+)
 create mode 100644 arch/x86/kernel/slaunch.c

diff --git a/arch/x86/kernel/Makefile b/arch/x86/kernel/Makefile
index e77261d..318366f 100644
--- a/arch/x86/kernel/Makefile
+++ b/arch/x86/kernel/Makefile
@@ -76,6 +76,7 @@ obj-$(CONFIG_X86_32)  += tls.o
 obj-$(CONFIG_IA32_EMULATION)   += tls.o
 obj-y  += step.o
 obj-$(CONFIG_INTEL_TXT)+= tboot.o
+obj-$(CONFIG_SECURE_LAUNCH)+= slaunch.o
 obj-$(CONFIG_ISA_DMA_API)  += i8237.o
 obj-$(CONFIG_STACKTRACE)   += stacktrace.o
 obj-y  += cpu/
diff --git a/arch/x86/kernel/setup.c b/arch/x86/kernel/setup.c
index 3511736..cae9476 100644
--- a/arch/x86/kernel/setup.c
+++ b/arch/x86/kernel/setup.c
@@ -18,6 +18,7 @@
 #include 
 #include 
 #include 
+#include <linux/slaunch.h>
 #include 
 
 #include 
@@ -1009,6 +1010,8 @@ void __init setup_arch(char **cmdline_p)
early_gart_iommu_check();
 #endif
 
+   slaunch_setup();
+
/*
 * partially used pages are not usable - thus
 * we are rounding upwards:
diff --git a/arch/x86/kernel/slaunch.c b/arch/x86/kernel/slaunch.c
new file mode 100644
index 000..e040e32
--- /dev/null
+++ b/arch/x86/kernel/slaunch.c
@@ -0,0 +1,495 @@
+// SPDX-License-Identifier: GPL-2.0
+/*
+ * Secure Launch late validation/setup, securityfs exposure and
+ * finalization support.
+ *
+ * Copyright (c) 2020, Oracle and/or its affiliates.
+ * Copyright (c) 2020 Apertus Solutions, LLC
+ *
+ * Author(s):
+ * Daniel P. Smith 
+ *
+ */
+
+#define pr_fmt(fmt) KBUILD_MODNAME ": " fmt
+
+#include 
+#include 
+#include 
+#include 
+#include 
+#include 
+#include 
+#include 
+#include 
+#include 
+#include 
+#include 
+#include 
+#include 
+#include 
+#include 
+#include 
+#include 
+#include 
+#include 
+
+static u32 sl_flags;
+static struct sl_ap_wake_info ap_wake_info;
+static u64 evtlog_addr;
+static u32 evtlog_size;
+static u64 vtd_pmr_lo_size;
+
+/* This should be plenty of room */
+static u8 txt_dmar[PAGE_SIZE] __aligned(16);
+
+u32 slaunch_get_flags(void)
+{
+   return sl_flags;
+}
+EXPORT_SYMBOL(slaunch_get_flags);
+
+struct sl_ap_wake_info *slaunch_get_ap_wake_info(void)
+{
return &ap_wake_info;
+}
+
+struct acpi_table_header *slaunch_get_dmar_table(struct acpi_table_header *dmar)
+{
+   /* The DMAR is only stashed and provided via TXT on Intel systems */
+   if (memcmp(txt_dmar, "DMAR", 4))
+   return dmar;
+
+   return (struct acpi_table_header *)(&txt_dmar[0]);
+}
+
+static void __init __noreturn slaunch_txt_reset(void __iomem *txt,
+   const char *msg, u64 error)
+{
+   u64 one = 1, val;
+
+   pr_err("%s", msg);
+
+   /*
+* This performs a TXT reset with a sticky error code. The reads of
+* TXT_CR_E2STS act as barriers.
+*/
+   memcpy_toio(txt + TXT_CR_ERRORCODE, &error, sizeof(u64));
+   memcpy_fromio(&val, txt + TXT_CR_E2STS, sizeof(u64));
+   memcpy_toio(txt + TXT_CR_CMD_NO_SECRETS, &one, sizeof(u64));
+   memcpy_fromio(&val, txt + TXT_CR_E2STS, sizeof(u64));
+   memcpy_toio(txt + TXT_CR_CMD_UNLOCK_MEM_CONFIG, &one, sizeof(u64));
+   memcpy_fromio(&val, txt + TXT_CR_E2STS, sizeof(u64));
+   memcpy_toio(txt + TXT_CR_CMD_RESET, &one, sizeof(u64));
+
+   for ( ; ; )
+   asm volatile ("hlt");
+
+   unreachable();
+}
+
+/*
+ * The TXT heap is too big to map all at once with early_ioremap
+ * so it is done a table at a time.
+ */
+static void __init *txt_early_get_heap_table(void __iomem *txt, u32 type,
+u32 bytes)
+{
+   void *heap;
+   u64 base, size, offset = 0;
+   int i;
+
+   if (type > TXT_SINIT_MLE_DATA_TABLE)
+   slaunch_txt_reset(txt,
+   "Error invalid table type for early heap walk\n",
+   SL_ERROR_HEAP_WALK);
+
+   memcpy_fromio(&base, txt + TXT_CR_HEAP_BASE, sizeof(u64));
+   memcpy_fromio(&size, txt + TXT_CR_HEAP_SIZE, sizeof(u64));
+
+   /* Iterate over heap tables looking for table of "type" */
+   for (i = 0; i < type; i++) {
+   base += offset;
+   heap = early_memremap(base, sizeof(u64));
+ 

[PATCH 04/13] x86: Add early TPM TIS/CRB interface support for Secure Launch

2020-09-24 Thread Ross Philipson
From: "Daniel P. Smith" 

The Secure Launch capability that is part of the compressed kernel
requires the ability to send measurements it takes to the TPM. This
commit introduces the necessary code to communicate with the hardware
interface of TPM devices.

Signed-off-by: Daniel P. Smith 
Signed-off-by: Ross Philipson 
---
 arch/x86/boot/compressed/Makefile |   2 +
 arch/x86/boot/compressed/tpm/crb.c| 304 ++
 arch/x86/boot/compressed/tpm/crb.h|  20 ++
 arch/x86/boot/compressed/tpm/tis.c| 215 +
 arch/x86/boot/compressed/tpm/tis.h|  46 +
 arch/x86/boot/compressed/tpm/tpm.h|  48 +
 arch/x86/boot/compressed/tpm/tpm_buff.c   | 121 
 arch/x86/boot/compressed/tpm/tpm_common.h | 127 +
 arch/x86/boot/compressed/tpm/tpmbuff.h|  34 
 arch/x86/boot/compressed/tpm/tpmio.c  |  51 +
 10 files changed, 968 insertions(+)
 create mode 100644 arch/x86/boot/compressed/tpm/crb.c
 create mode 100644 arch/x86/boot/compressed/tpm/crb.h
 create mode 100644 arch/x86/boot/compressed/tpm/tis.c
 create mode 100644 arch/x86/boot/compressed/tpm/tis.h
 create mode 100644 arch/x86/boot/compressed/tpm/tpm.h
 create mode 100644 arch/x86/boot/compressed/tpm/tpm_buff.c
 create mode 100644 arch/x86/boot/compressed/tpm/tpm_common.h
 create mode 100644 arch/x86/boot/compressed/tpm/tpmbuff.h
 create mode 100644 arch/x86/boot/compressed/tpm/tpmio.c

diff --git a/arch/x86/boot/compressed/Makefile 
b/arch/x86/boot/compressed/Makefile
index 0fd84b9..5515afa 100644
--- a/arch/x86/boot/compressed/Makefile
+++ b/arch/x86/boot/compressed/Makefile
@@ -99,6 +99,8 @@ efi-obj-$(CONFIG_EFI_STUB) = 
$(objtree)/drivers/firmware/efi/libstub/lib.a
 vmlinux-objs-$(CONFIG_SECURE_LAUNCH) += $(obj)/early_sha1.o
 vmlinux-objs-$(CONFIG_SECURE_LAUNCH_SHA256) += $(obj)/early_sha256.o
 vmlinux-objs-$(CONFIG_SECURE_LAUNCH_SHA512) += $(obj)/early_sha512.o
+vmlinux-objs-$(CONFIG_SECURE_LAUNCH) += $(obj)/tpm/tpmio.o 
$(obj)/tpm/tpm_buff.o \
+   $(obj)/tpm/tis.o $(obj)/tpm/crb.o
 
 # The compressed kernel is built with -fPIC/-fPIE so that a boot loader
 # can place it anywhere in memory and it will still run. However, since
diff --git a/arch/x86/boot/compressed/tpm/crb.c 
b/arch/x86/boot/compressed/tpm/crb.c
new file mode 100644
index 000..05c4038
--- /dev/null
+++ b/arch/x86/boot/compressed/tpm/crb.c
@@ -0,0 +1,304 @@
+// SPDX-License-Identifier: GPL-2.0
+/*
+ * Copyright (c) 2020 Apertus Solutions, LLC
+ *
+ * Author(s):
+ *  Daniel P. Smith 
+ */
+
+#include 
+#include "tpm.h"
+#include "tpmbuff.h"
+#include "crb.h"
+#include "tpm_common.h"
+
+#define TPM_LOC_STATE  0x0000
+#define TPM_LOC_CTRL   0x0008
+#define TPM_LOC_STS0x000C
+#define TPM_CRB_INTF_ID0x0030
+#define TPM_CRB_CTRL_EXT   0x0038
+#define TPM_CRB_CTRL_REQ   0x0040
+#define TPM_CRB_CTRL_STS   0x0044
+#define TPM_CRB_CTRL_CANCEL0x0048
+#define TPM_CRB_CTRL_START 0x004C
+#define TPM_CRB_INT_ENABLE 0x0050
+#define TPM_CRB_INT_STS0x0054
+#define TPM_CRB_CTRL_CMD_SIZE  0x0058
+#define TPM_CRB_CTRL_CMD_LADDR 0x005C
+#define TPM_CRB_CTRL_CMD_HADDR 0x0060
+#define TPM_CRB_CTRL_RSP_SIZE  0x0064
+#define TPM_CRB_CTRL_RSP_ADDR  0x0068
+#define TPM_CRB_DATA_BUFFER0x0080
+
+#define REGISTER(l, r) (((l) << 12) | (r))
+
+static u8 locality = TPM_NO_LOCALITY;
+
+struct tpm_loc_state {
+   union {
+   u8 val;
+   struct {
+   u8 tpm_established:1;
+   u8 loc_assigned:1;
+   u8 active_locality:3;
+   u8 _reserved:2;
+   u8 tpm_reg_valid_sts:1;
+   };
+   };
+} __packed;
+
+struct tpm_loc_ctrl {
+   union {
+   u32 val;
+   struct {
+   u32 request_access:1;
+   u32 relinquish:1;
+   u32 seize:1;
+   u32 reset_establishment_bit:1;
+   u32 _reserved:28;
+   };
+   };
+} __packed;
+
+struct tpm_loc_sts {
+   union {
+   u32 val;
+   struct {
+   u32 granted:1;
+   u32 beenSeized:1;
+   u32 _reserved:30;
+   };
+   };
+} __packed;
+
+struct tpm_crb_ctrl_req {
+   union {
+   u32 val;
+   struct {
+   u32 cmd_ready:1;
+   u32 go_idle:1;
+   u32 _reserved:30;
+   };
+   };
+} __packed;
+
+struct tpm_crb_ctrl_sts {
+   union {
+   u32 val;
+   struct {
+   u32 tpm_sts:1;
+   u32 tpm_idle:1;
+   u32 _reserved:30;
+   };
+   };
+}

[PATCH 00/13] x86: Trenchboot secure dynamic launch Linux kernel support

2020-09-24 Thread Ross Philipson
The Trenchboot project's focus on boot security has led to enabling the
Linux kernel to be directly invocable by the x86 Dynamic Launch
instruction(s) for establishing a Dynamic Root of Trust for Measurement
(DRTM). The dynamic launch will be initiated by a boot loader with
associated support added to it, for example the first targeted boot
loader will be GRUB2. An integral part of establishing the DRTM involves
measuring everything that is intended to be run (kernel image, initrd,
etc) and everything that will configure that kernel to run (command
line, boot params, etc) into specific PCRs, the DRTM PCRs (17-22), in
the TPM. Another key aspect is that the dynamic launch is rooted in
hardware; that is to say, the hardware (CPU) takes the first measurement
for the chain of integrity measurements. On Intel this is done using
the GETSEC instruction provided by Intel's TXT and the SKINIT
instruction provided by AMD's AMD-V. Information on these technologies
can be readily found online. This patchset introduces Intel TXT support.

To enable the kernel to be launched by GETSEC, a stub must be built
into the setup section of the compressed kernel to handle the specific
state that the dynamic launch process leaves the BSP in. This is
analogous to the EFI stub that is found in the same area. Also this stub
must measure everything that is going to be used as early as possible.
This stub code and subsequent code must also deal with the specific
state that the dynamic launch leaves the APs in.

A quick note on terminology. The larger open source project itself is
called Trenchboot, which is hosted on Github (links below). The kernel
feature enabling the use of the x86 technology is referred to as "Secure
Launch" within the kernel code. As such the prefixes sl_/SL_ or
slaunch/SLAUNCH will be seen in the code. The stub code discussed above
is referred to as the SL stub.

The basic flow is:

 - Entry from the dynamic launch jumps to the SL stub
 - SL stub fixes up the world on the BSP
 - For TXT, SL stub wakes the APs, fixes up their worlds
 - For TXT, APs are left halted waiting for an NMI to wake them
 - SL stub jumps to startup_32
 - SL main runs to measure configuration and module information into the
   DRTM PCRs. It also locates the TPM event log.
 - Kernel boot proceeds normally from this point.
 - During early setup, slaunch_setup() runs to finish some validation
   and setup tasks.
 - The SMP bringup code is modified to wake the waiting APs. APs vector
   to rmpiggy and start up normally from that point.
 - Kernel boot finishes booting normally
 - SL securityfs module is present to allow reading and writing of the
   TPM event log.
 - SEXIT support to leave SMX mode is present on the kexec path and
   the various reboot paths (poweroff, reset, halt).

Links:

The Trenchboot project including documentation:

https://github.com/trenchboot

Intel TXT is documented in its own specification and in the SDM Instruction Set 
volume:

https://www.intel.com/content/dam/www/public/us/en/documents/guides/intel-txt-software-development-guide.pdf
https://software.intel.com/en-us/articles/intel-sdm

AMD SKINIT is documented in the System Programming manual:

https://www.amd.com/system/files/TechDocs/24593.pdf

GRUB2 pre-launch support patchset (WIP):

https://lists.gnu.org/archive/html/grub-devel/2020-05/msg00011.html

Thanks
Ross Philipson and Daniel P. Smith

Daniel P. Smith (4):
  x86: Add early TPM TIS/CRB interface support for Secure Launch
  x86: Add early TPM1.2/TPM2.0 interface support for Secure Launch
  x86: Add early general TPM interface support for Secure Launch
  x86: Secure Launch adding event log securityfs

Ross Philipson (9):
  x86: Secure Launch Kconfig
  x86: Secure Launch main header file
  x86: Add early SHA support for Secure Launch early measurements
  x86: Secure Launch kernel early boot stub
  x86: Secure Launch kernel late boot stub
  x86: Secure Launch SMP bringup support
  kexec: Secure Launch kexec SEXIT support
  reboot: Secure Launch SEXIT support on reboot paths
  tpm: Allow locality 2 to be set when initializing the TPM for Secure
Launch

 Documentation/x86/boot.rst|   9 +
 arch/x86/Kconfig  |  36 ++
 arch/x86/boot/compressed/Makefile |   8 +
 arch/x86/boot/compressed/early_sha1.c | 104 
 arch/x86/boot/compressed/early_sha1.h |  17 +
 arch/x86/boot/compressed/early_sha256.c   |   6 +
 arch/x86/boot/compressed/early_sha512.c   |   6 +
 arch/x86/boot/compressed/head_64.S|  34 +
 arch/x86/boot/compressed/kernel_info.S|   7 +
 arch/x86/boot/compressed/sl_main.c| 390 
 arch/x86/boot/compressed/sl_stub.S| 606 ++
 arch/x86/boot/compressed/tpm/crb.c| 304 +
 arch/x86/boot/compressed/tpm/crb.h|  20 +
 arch/x86/boot/compressed/tpm/tis.c| 215 +++
 arch/x86/boot/compressed

[PATCH 09/13] x86: Secure Launch SMP bringup support

2020-09-24 Thread Ross Philipson
On Intel, the APs are left in a well documented state after TXT performs
the late launch. Specifically they cannot have #INIT asserted on them so
a standard startup via INIT/SIPI/SIPI cannot be performed. Instead the
early SL stub code parked the APs in a pause/jmp loop waiting for an NMI.
The modified SMP boot code is called for the Secure Launch case. The
jump address for the RM piggy entry point is fixed up in the jump where
the APs are waiting and an NMI IPI is sent to the AP. The AP vectors to
the Secure Launch entry point in the RM piggy which mimics what the real
mode code would do, then jumps to the standard RM piggy protected mode
entry point.

Signed-off-by: Ross Philipson 
---
 arch/x86/include/asm/realmode.h  |  3 ++
 arch/x86/kernel/smpboot.c| 86 
 arch/x86/realmode/rm/header.S|  3 ++
 arch/x86/realmode/rm/trampoline_64.S | 37 
 4 files changed, 129 insertions(+)

diff --git a/arch/x86/include/asm/realmode.h b/arch/x86/include/asm/realmode.h
index b35030e..20bc283 100644
--- a/arch/x86/include/asm/realmode.h
+++ b/arch/x86/include/asm/realmode.h
@@ -34,6 +34,9 @@ struct real_mode_header {
 #ifdef CONFIG_X86_64
u32 machine_real_restart_seg;
 #endif
+#ifdef CONFIG_SECURE_LAUNCH
+   u32 sl_trampoline_start32;
+#endif
 };
 
 /* This must match data at realmode/rm/trampoline_{32,64}.S */
diff --git a/arch/x86/kernel/smpboot.c b/arch/x86/kernel/smpboot.c
index f5ef689..0ca0b07 100644
--- a/arch/x86/kernel/smpboot.c
+++ b/arch/x86/kernel/smpboot.c
@@ -56,6 +56,7 @@
 #include 
 #include 
 #include 
+#include <linux/slaunch.h>
 
 #include 
 #include 
@@ -1017,6 +1018,83 @@ int common_cpu_up(unsigned int cpu, struct task_struct 
*idle)
return 0;
 }
 
+#ifdef CONFIG_SECURE_LAUNCH
+
+static atomic_t first_ap_only = {1};
+
+/*
+ * Called to fix the long jump address for the waiting APs to vector to
+ * the correct startup location in the Secure Launch stub in the rmpiggy.
+ */
+static int
+slaunch_fixup_jump_vector(void)
+{
+   struct sl_ap_wake_info *ap_wake_info;
+   unsigned int *ap_jmp_ptr = 0;
+
+   if (!atomic_dec_and_test(&first_ap_only))
+   return 0;
+
+   ap_wake_info = slaunch_get_ap_wake_info();
+
+   ap_jmp_ptr = (unsigned int *)__va(ap_wake_info->ap_wake_block +
+ ap_wake_info->ap_jmp_offset);
+
+   *ap_jmp_ptr = real_mode_header->sl_trampoline_start32;
+
+   pr_info("TXT AP long jump address updated\n");
+
+   return 0;
+}
+
+/*
+ * TXT AP startup is quite different than normal. The APs cannot have #INIT
+ * asserted on them or receive SIPIs. The early Secure Launch code has parked
+ * the APs in a pause loop waiting to receive an NMI. This will wake the APs
+ * and have them jump to the protected mode code in the rmpiggy where the rest
+ * of the SMP boot of the AP will proceed normally.
+ */
+static int
+slaunch_wakeup_cpu_from_txt(int cpu, int apicid)
+{
+   unsigned long send_status = 0, accept_status = 0;
+
+   /* Only done once */
+   if (slaunch_fixup_jump_vector())
+   return -1;
+
+   /* Send NMI IPI to idling AP and wake it up */
+   apic_icr_write(APIC_DM_NMI, apicid);
+
+   if (init_udelay == 0)
+   udelay(10);
+   else
+   udelay(300);
+
+   send_status = safe_apic_wait_icr_idle();
+
+   if (init_udelay == 0)
+   udelay(10);
+   else
+   udelay(300);
+
+   accept_status = (apic_read(APIC_ESR) & 0xEF);
+
+   if (send_status)
+   pr_err("Secure Launch IPI never delivered???\n");
+   if (accept_status)
+   pr_err("Secure Launch IPI delivery error (%lx)\n",
+   accept_status);
+
+   return (send_status | accept_status);
+}
+
+#else
+
+#define slaunch_wakeup_cpu_from_txt(cpu, apicid)   0
+
+#endif  /* !CONFIG_SECURE_LAUNCH */
+
 /*
  * NOTE - on most systems this is a PHYSICAL apic ID, but on multiquad
  * (ie clustered apic addressing mode), this is a LOGICAL apic ID.
@@ -1071,6 +1149,12 @@ static int do_boot_cpu(int apicid, int cpu, struct 
task_struct *idle,
cpumask_clear_cpu(cpu, cpu_initialized_mask);
smp_mb();
 
+   /* With Intel TXT, the AP startup is totally different */
+   if (slaunch_get_flags() & (SL_FLAG_ACTIVE|SL_FLAG_ARCH_TXT)) {
+   boot_error = slaunch_wakeup_cpu_from_txt(cpu, apicid);
+   goto txt_wake;
+   }
+
/*
 * Wake up a CPU in difference cases:
 * - Use the method in the APIC driver if it's defined
@@ -1083,6 +1167,8 @@ static int do_boot_cpu(int apicid, int cpu, struct 
task_struct *idle,
boot_error = wakeup_cpu_via_init_nmi(cpu, start_ip, apicid,
 cpu0_nmi_registered);
 
+txt_wake:
+
if (!boot_error) {
/*
 

[PATCH 10/13] x86: Secure Launch adding event log securityfs

2020-09-24 Thread Ross Philipson
From: "Daniel P. Smith" 

The late init functionality registers securityfs nodes to allow access
to TXT register fields on Intel, along with fetching events from and
writing events to the late launch TPM log.

Signed-off-by: Daniel P. Smith 
Signed-off-by: Ross Philipson 
Signed-off-by: garnetgrimm 
---
 arch/x86/kernel/slaunch.c | 293 +-
 1 file changed, 292 insertions(+), 1 deletion(-)

diff --git a/arch/x86/kernel/slaunch.c b/arch/x86/kernel/slaunch.c
index e040e32..7bdb89e 100644
--- a/arch/x86/kernel/slaunch.c
+++ b/arch/x86/kernel/slaunch.c
@@ -8,7 +8,7 @@
  *
  * Author(s):
  * Daniel P. Smith 
- *
+ * Garnet T. Grimm 
  */
 
 #define pr_fmt(fmt) KBUILD_MODNAME ": " fmt
@@ -493,3 +493,294 @@ void __init slaunch_setup(void)
vendor[3] == INTEL_CPUID_MFGID_EDX)
slaunch_setup_intel();
 }
+
+#define SL_FS_ENTRIES  10
+/* root directory node must be last */
+#define SL_ROOT_DIR_ENTRY  (SL_FS_ENTRIES - 1)
+#define SL_TXT_DIR_ENTRY   (SL_FS_ENTRIES - 2)
+#define SL_TXT_FILE_FIRST  (SL_TXT_DIR_ENTRY - 1)
+#define SL_TXT_ENTRY_COUNT 7
+
+#define DECLARE_TXT_PUB_READ_U(size, fmt, msg_size)\
+static ssize_t txt_pub_read_u##size(unsigned int offset,   \
+   loff_t *read_offset,\
+   size_t read_len,\
+   char __user *buf)   \
+{  \
+   void __iomem *txt;  \
+   char msg_buffer[msg_size];  \
+   u##size reg_value = 0;  \
+   txt = ioremap(TXT_PUB_CONFIG_REGS_BASE, \
+   TXT_NR_CONFIG_PAGES * PAGE_SIZE);   \
+   if (IS_ERR(txt))\
+   return PTR_ERR(txt);\
+   memcpy_fromio(&reg_value, txt + offset, sizeof(u##size)); \
+   iounmap(txt);   \
+   snprintf(msg_buffer, msg_size, fmt, reg_value); \
+   return simple_read_from_buffer(buf, read_len, read_offset,  \
+   &msg_buffer, msg_size); \
+}
+
+DECLARE_TXT_PUB_READ_U(8, "%#04x\n", 6);
+DECLARE_TXT_PUB_READ_U(32, "%#010x\n", 12);
+DECLARE_TXT_PUB_READ_U(64, "%#018llx\n", 20);
+
+#define DECLARE_TXT_FOPS(reg_name, reg_offset, reg_size)   \
+static ssize_t txt_##reg_name##_read(struct file *flip,
\
+   char __user *buf, size_t read_len, loff_t *read_offset) \
+{  \
+   return txt_pub_read_u##reg_size(reg_offset, read_offset,\
+   read_len, buf); \
+}  \
+static const struct file_operations reg_name##_ops = { \
+   .read = txt_##reg_name##_read,  \
+}
+
+DECLARE_TXT_FOPS(sts, TXT_CR_STS, 64);
+DECLARE_TXT_FOPS(ests, TXT_CR_ESTS, 8);
+DECLARE_TXT_FOPS(errorcode, TXT_CR_ERRORCODE, 32);
+DECLARE_TXT_FOPS(didvid, TXT_CR_DIDVID, 64);
+DECLARE_TXT_FOPS(e2sts, TXT_CR_E2STS, 64);
+DECLARE_TXT_FOPS(ver_emif, TXT_CR_VER_EMIF, 32);
+DECLARE_TXT_FOPS(scratchpad, TXT_CR_SCRATCHPAD, 64);
+
+/*
+ * Securityfs exposure
+ */
+struct memfile {
+   char *name;
+   void *addr;
+   size_t size;
+};
+
+static struct memfile sl_evtlog = {"eventlog", 0, 0};
+static void *txt_heap;
+static struct txt_heap_event_log_pointer2_1_element __iomem *evtlog20;
+static DEFINE_MUTEX(sl_evt_log_mutex);
+
+static ssize_t sl_evtlog_read(struct file *file, char __user *buf,
+ size_t count, loff_t *pos)
+{
+   ssize_t size;
+
+   if (!sl_evtlog.addr)
+   return 0;
+
+   mutex_lock(&sl_evt_log_mutex);
+   size = simple_read_from_buffer(buf, count, pos, sl_evtlog.addr,
+  sl_evtlog.size);
+   mutex_unlock(&sl_evt_log_mutex);
+
+   return size;
+}
+
+static ssize_t sl_evtlog_write(struct file *file, const char __user *buf,
+   size_t datalen, loff_t *ppos)
+{
+   char *data;
+   ssize_t result;
+
+   if (!sl_evtlog.addr)
+   return 0;
+
+   /* No partial writes. */
+   result = -EINVAL;
+   if (*ppos != 0)
+   goto out;
+
+   data = memdup_user(buf, datalen);
+   if (IS_ERR(data)) {
+   result = PTR_ERR(data);
+   goto out;
+   }
+
+   mutex_lock(&sl_evt_log_mutex);
+   if (evtlog20)
+

[PATCH 07/13] x86: Secure Launch kernel early boot stub

2020-09-24 Thread Ross Philipson
The Secure Launch (SL) stub provides the entry point for Intel TXT (and
later AMD SKINIT) to vector to during the late launch. The symbol
sl_stub_entry is that entry point and its offset into the kernel is
conveyed to the launching code using the MLE (Measured Launch
Environment) header in the structure named mle_header. The offset of the
MLE header is set in the kernel_info. The routine sl_stub contains the
very early late launch setup code responsible for setting up the basic
environment to allow the normal kernel startup_32 code to proceed. It is
also responsible for properly waking and handling the APs on Intel
platforms. The routine sl_main, which runs after entering 64b mode, is
responsible for measuring configuration and module information before
it is used, e.g. the boot params, the kernel command line, the TXT heap,
an external initramfs, etc.

Signed-off-by: Ross Philipson 
---
 Documentation/x86/boot.rst |   9 +
 arch/x86/boot/compressed/Makefile  |   1 +
 arch/x86/boot/compressed/head_64.S |  34 ++
 arch/x86/boot/compressed/kernel_info.S |   7 +
 arch/x86/boot/compressed/sl_main.c | 390 +
 arch/x86/boot/compressed/sl_stub.S | 606 +
 arch/x86/kernel/asm-offsets.c  |  16 +
 7 files changed, 1063 insertions(+)
 create mode 100644 arch/x86/boot/compressed/sl_main.c
 create mode 100644 arch/x86/boot/compressed/sl_stub.S

diff --git a/Documentation/x86/boot.rst b/Documentation/x86/boot.rst
index 7fafc7a..7232801 100644
--- a/Documentation/x86/boot.rst
+++ b/Documentation/x86/boot.rst
@@ -1026,6 +1026,15 @@ Offset/size: 0x000c/4
 
   This field contains maximal allowed type for setup_data and setup_indirect 
structs.
 
+   =============	=================
+   Field name:	mle_header_offset
+   Offset/size:	0x0010/4
+   =============	=================
+
+  This field contains the offset to the Secure Launch Measured Launch 
Environment
+  (MLE) header. This offset is used to locate information needed during a 
secure
+  late launch using Intel TXT and AMD SKINIT.
+
 
 The Image Checksum
 ==
diff --git a/arch/x86/boot/compressed/Makefile 
b/arch/x86/boot/compressed/Makefile
index 35947b9..d881ff7 100644
--- a/arch/x86/boot/compressed/Makefile
+++ b/arch/x86/boot/compressed/Makefile
@@ -102,6 +102,7 @@ vmlinux-objs-$(CONFIG_SECURE_LAUNCH_SHA512) += 
$(obj)/early_sha512.o
 vmlinux-objs-$(CONFIG_SECURE_LAUNCH) += $(obj)/tpm/tpmio.o 
$(obj)/tpm/tpm_buff.o \
$(obj)/tpm/tis.o $(obj)/tpm/crb.o $(obj)/tpm/tpm1_cmds.o \
$(obj)/tpm/tpm2_cmds.o $(obj)/tpm/tpm2_auth.o $(obj)/tpm/tpm.o
+vmlinux-objs-$(CONFIG_SECURE_LAUNCH) += $(obj)/sl_main.o $(obj)/sl_stub.o
 
 # The compressed kernel is built with -fPIC/-fPIE so that a boot loader
 # can place it anywhere in memory and it will still run. However, since
diff --git a/arch/x86/boot/compressed/head_64.S 
b/arch/x86/boot/compressed/head_64.S
index 97d37f0..42043bf 100644
--- a/arch/x86/boot/compressed/head_64.S
+++ b/arch/x86/boot/compressed/head_64.S
@@ -279,6 +279,21 @@ SYM_INNER_LABEL(efi32_pe_stub_entry, SYM_L_LOCAL)
 SYM_FUNC_END(efi32_stub_entry)
 #endif
 
+#ifdef CONFIG_SECURE_LAUNCH
+SYM_FUNC_START(sl_stub_entry)
+   /*
+* On entry, %ebx has the entry abs offset to sl_stub_entry. To
+* find the beginning of where we are loaded, sub off from the
+* beginning.
+*/
+   leal    (startup_32 - sl_stub_entry)(%ebx), %ebx
+
+   /* More room to work in sl_stub in the text section */
+   jmp sl_stub
+
+SYM_FUNC_END(sl_stub_entry)
+#endif
+
.code64
.org 0x200
 SYM_CODE_START(startup_64)
@@ -537,6 +552,25 @@ SYM_FUNC_START_LOCAL_NOALIGN(.Lrelocated)
shrq    $3, %rcx
rep stosq
 
+#ifdef CONFIG_SECURE_LAUNCH
+   /*
+* Have to do the final early sl stub work in 64b area.
+*
+* *** NOTE ***
+*
+* Several boot params get used before we get a chance to measure
+* them in this call. This is a known issue and we currently don't
+* have a solution. The scratch field doesn't matter and loadflags
+* have KEEP_SEGMENTS set by the stub code. There is no obvious way
+* to do anything about the use of kernel_alignment or init_size
+* though these seem low risk.
+*/
+   pushq   %rsi
+   movq    %rsi, %rdi
+   callq   sl_main
+   popq%rsi
+#endif
+
 /*
  * Do the extraction, and jump to the new kernel..
  */
diff --git a/arch/x86/boot/compressed/kernel_info.S 
b/arch/x86/boot/compressed/kernel_info.S
index f818ee8..192d557 100644
--- a/arch/x86/boot/compressed/kernel_info.S
+++ b/arch/x86/boot/compressed/kernel_info.S
@@ -17,6 +17,13 @@ kernel_info:
/* Maximal allowed type for setup_data and setup_indirect structs. */
.long   SETUP_TYPE_MAX
 
+   /* Offset to the MLE header structure */
+#ifdef CONFIG_SECURE_LAUNCH
+   .long   mle_header

[PATCH 03/13] x86: Add early SHA support for Secure Launch early measurements

2020-09-24 Thread Ross Philipson
The SHA algorithms are necessary to measure configuration information into
the TPM as early as possible before using the values. This implementation
uses the established approach of #including the SHA libraries directly in
the code since the compressed kernel is not uncompressed at this point.

The SHA code here has its origins in the code from the main kernel. That
code could not be pulled directly into the setup portion of the compressed
kernel because of other dependencies it pulls in. The result is this is a
modified copy of that code that still leverages the core SHA algorithms.

Signed-off-by: Daniel P. Smith 
Signed-off-by: Ross Philipson 
---
 arch/x86/boot/compressed/Makefile   |   4 +
 arch/x86/boot/compressed/early_sha1.c   | 104 
 arch/x86/boot/compressed/early_sha1.h   |  17 +++
 arch/x86/boot/compressed/early_sha256.c |   6 +
 arch/x86/boot/compressed/early_sha512.c |   6 +
 include/linux/sha512.h  |  21 
 lib/sha1.c  |   4 +
 lib/sha512.c| 209 
 8 files changed, 371 insertions(+)
 create mode 100644 arch/x86/boot/compressed/early_sha1.c
 create mode 100644 arch/x86/boot/compressed/early_sha1.h
 create mode 100644 arch/x86/boot/compressed/early_sha256.c
 create mode 100644 arch/x86/boot/compressed/early_sha512.c
 create mode 100644 include/linux/sha512.h
 create mode 100644 lib/sha512.c

diff --git a/arch/x86/boot/compressed/Makefile 
b/arch/x86/boot/compressed/Makefile
index ff7894f..0fd84b9 100644
--- a/arch/x86/boot/compressed/Makefile
+++ b/arch/x86/boot/compressed/Makefile
@@ -96,6 +96,10 @@ vmlinux-objs-$(CONFIG_ACPI) += $(obj)/acpi.o
 vmlinux-objs-$(CONFIG_EFI_MIXED) += $(obj)/efi_thunk_$(BITS).o
 efi-obj-$(CONFIG_EFI_STUB) = $(objtree)/drivers/firmware/efi/libstub/lib.a
 
+vmlinux-objs-$(CONFIG_SECURE_LAUNCH) += $(obj)/early_sha1.o
+vmlinux-objs-$(CONFIG_SECURE_LAUNCH_SHA256) += $(obj)/early_sha256.o
+vmlinux-objs-$(CONFIG_SECURE_LAUNCH_SHA512) += $(obj)/early_sha512.o
+
 # The compressed kernel is built with -fPIC/-fPIE so that a boot loader
 # can place it anywhere in memory and it will still run. However, since
 # it is executed as-is without any ELF relocation processing performed
diff --git a/arch/x86/boot/compressed/early_sha1.c 
b/arch/x86/boot/compressed/early_sha1.c
new file mode 100644
index 000..198c46d
--- /dev/null
+++ b/arch/x86/boot/compressed/early_sha1.c
@@ -0,0 +1,104 @@
+// SPDX-License-Identifier: GPL-2.0
+/*
+ * Copyright (c) 2020, Oracle and/or its affiliates.
+ * Copyright (c) 2020 Apertus Solutions, LLC.
+ */
+
+#include 
+#include 
+#include 
+#include 
+#include 
+
+#include "early_sha1.h"
+
+#define SHA1_DISABLE_EXPORT
+#include "../../../../lib/sha1.c"
+
+/* The SHA1 implementation in lib/sha1.c was written to get the workspace
+ * buffer as a parameter. This wrapper function provides a container
+ * around a temporary workspace that is cleared after the transform completes.
+ */
+static void __sha_transform(u32 *digest, const char *data)
+{
+   u32 ws[SHA1_WORKSPACE_WORDS];
+
+   sha1_transform(digest, data, ws);
+
+   memset(ws, 0, sizeof(ws));
+   /*
+* As this is cryptographic code, prevent the memset 0 from being
+* optimized out potentially leaving secrets in memory.
+*/
+   wmb();
+
+}
+
+void early_sha1_init(struct sha1_state *sctx)
+{
+   sha1_init(sctx->state);
+   sctx->count = 0;
+}
+
+void early_sha1_update(struct sha1_state *sctx,
+  const u8 *data,
+  unsigned int len)
+{
+   unsigned int partial = sctx->count % SHA1_BLOCK_SIZE;
+
+   sctx->count += len;
+
+   if (likely((partial + len) >= SHA1_BLOCK_SIZE)) {
+   int blocks;
+
+   if (partial) {
+   int p = SHA1_BLOCK_SIZE - partial;
+
+   memcpy(sctx->buffer + partial, data, p);
+   data += p;
+   len -= p;
+
+   __sha_transform(sctx->state, sctx->buffer);
+   }
+
+   blocks = len / SHA1_BLOCK_SIZE;
+   len %= SHA1_BLOCK_SIZE;
+
+   if (blocks) {
+   while (blocks--) {
+   __sha_transform(sctx->state, data);
+   data += SHA1_BLOCK_SIZE;
+   }
+   }
+   partial = 0;
+   }
+
+   if (len)
+   memcpy(sctx->buffer + partial, data, len);
+}
+
+void early_sha1_final(struct sha1_state *sctx, u8 *out)
+{
+   const int bit_offset = SHA1_BLOCK_SIZE - sizeof(__be64);
+   __be64 *bits = (__be64 *)(sctx->buffer + bit_offset);
+   __be32 *digest = (__be32 *)out;
+   unsigned int partial = sctx->count % SHA1_BLOCK_SIZE;
+   int i;
+
+   sctx->buffer[partial++] = 0x80;
+   if (

[PATCH 05/13] x86: Add early TPM1.2/TPM2.0 interface support for Secure Launch

2020-09-24 Thread Ross Philipson
From: "Daniel P. Smith" 

This commit introduces an abstraction for TPM1.2 and TPM2.0 devices
above the TPM hardware interface.

Signed-off-by: Daniel P. Smith 
Signed-off-by: Ross Philipson 
---
 arch/x86/boot/compressed/Makefile |   3 +-
 arch/x86/boot/compressed/tpm/tpm1.h   | 112 
 arch/x86/boot/compressed/tpm/tpm1_cmds.c  |  99 ++
 arch/x86/boot/compressed/tpm/tpm2.h   |  89 
 arch/x86/boot/compressed/tpm/tpm2_auth.c  |  44 
 arch/x86/boot/compressed/tpm/tpm2_auth.h  |  21 
 arch/x86/boot/compressed/tpm/tpm2_cmds.c  | 145 ++
 arch/x86/boot/compressed/tpm/tpm2_constants.h |  66 
 8 files changed, 578 insertions(+), 1 deletion(-)
 create mode 100644 arch/x86/boot/compressed/tpm/tpm1.h
 create mode 100644 arch/x86/boot/compressed/tpm/tpm1_cmds.c
 create mode 100644 arch/x86/boot/compressed/tpm/tpm2.h
 create mode 100644 arch/x86/boot/compressed/tpm/tpm2_auth.c
 create mode 100644 arch/x86/boot/compressed/tpm/tpm2_auth.h
 create mode 100644 arch/x86/boot/compressed/tpm/tpm2_cmds.c
 create mode 100644 arch/x86/boot/compressed/tpm/tpm2_constants.h

diff --git a/arch/x86/boot/compressed/Makefile 
b/arch/x86/boot/compressed/Makefile
index 5515afa..a4308d5 100644
--- a/arch/x86/boot/compressed/Makefile
+++ b/arch/x86/boot/compressed/Makefile
@@ -100,7 +100,8 @@ vmlinux-objs-$(CONFIG_SECURE_LAUNCH) += $(obj)/early_sha1.o
 vmlinux-objs-$(CONFIG_SECURE_LAUNCH_SHA256) += $(obj)/early_sha256.o
 vmlinux-objs-$(CONFIG_SECURE_LAUNCH_SHA512) += $(obj)/early_sha512.o
 vmlinux-objs-$(CONFIG_SECURE_LAUNCH) += $(obj)/tpm/tpmio.o 
$(obj)/tpm/tpm_buff.o \
-   $(obj)/tpm/tis.o $(obj)/tpm/crb.o
+   $(obj)/tpm/tis.o $(obj)/tpm/crb.o $(obj)/tpm/tpm1_cmds.o \
+   $(obj)/tpm/tpm2_cmds.o $(obj)/tpm/tpm2_auth.o
 
 # The compressed kernel is built with -fPIC/-fPIE so that a boot loader
 # can place it anywhere in memory and it will still run. However, since
diff --git a/arch/x86/boot/compressed/tpm/tpm1.h 
b/arch/x86/boot/compressed/tpm/tpm1.h
new file mode 100644
index 000..498fa45
--- /dev/null
+++ b/arch/x86/boot/compressed/tpm/tpm1.h
@@ -0,0 +1,112 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+/*
+ * Copyright (c) 2020 Apertus Solutions, LLC
+ *
+ * Author(s):
+ *  Daniel P. Smith 
+ *
+ * The definitions in this header are extracted from the Trusted Computing
+ * Group's "TPM Main Specification", Parts 1-3.
+ *
+ */
+
+#ifndef _TPM1_H
+#define _TPM1_H
+
+#include "tpm.h"
+
+/* Section 2.2.3 */
+#define TPM_AUTH_DATA_USAGE u8
+#define TPM_PAYLOAD_TYPE u8
+#define TPM_VERSION_BYTE u8
+#define TPM_TAG u16
+#define TPM_PROTOCOL_ID u16
+#define TPM_STARTUP_TYPE u16
+#define TPM_ENC_SCHEME u16
+#define TPM_SIG_SCHEME u16
+#define TPM_MIGRATE_SCHEME u16
+#define TPM_PHYSICAL_PRESENCE u16
+#define TPM_ENTITY_TYPE u16
+#define TPM_KEY_USAGE u16
+#define TPM_EK_TYPE u16
+#define TPM_STRUCTURE_TAG u16
+#define TPM_PLATFORM_SPECIFIC u16
+#define TPM_COMMAND_CODE u32
+#define TPM_CAPABILITY_AREA u32
+#define TPM_KEY_FLAGS u32
+#define TPM_ALGORITHM_ID u32
+#define TPM_MODIFIER_INDICATOR u32
+#define TPM_ACTUAL_COUNT u32
+#define TPM_TRANSPORT_ATTRIBUTES u32
+#define TPM_AUTHHANDLE u32
+#define TPM_DIRINDEX u32
+#define TPM_KEY_HANDLE u32
+#define TPM_PCRINDEX u32
+#define TPM_RESULT u32
+#define TPM_RESOURCE_TYPE u32
+#define TPM_KEY_CONTROL u32
+#define TPM_NV_INDEX u32
+#define TPM_FAMILY_ID u32
+#define TPM_FAMILY_VERIFICATION u32
+#define TPM_STARTUP_EFFECTS u32
+#define TPM_SYM_MODE u32
+#define TPM_FAMILY_FLAGS u32
+#define TPM_DELEGATE_INDEX u32
+#define TPM_CMK_DELEGATE u32
+#define TPM_COUNT_ID u32
+#define TPM_REDIT_COMMAND u32
+#define TPM_TRANSHANDLE u32
+#define TPM_HANDLE u32
+#define TPM_FAMILY_OPERATION u32
+
+/* Section 6 */
+#define TPM_TAG_RQU_COMMAND        0x00C1
+#define TPM_TAG_RQU_AUTH1_COMMAND  0x00C2
+#define TPM_TAG_RQU_AUTH2_COMMAND  0x00C3
+#define TPM_TAG_RSP_COMMAND        0x00C4
+#define TPM_TAG_RSP_AUTH1_COMMAND  0x00C5
+#define TPM_TAG_RSP_AUTH2_COMMAND  0x00C6
+
+/* Section 16 */
+#define TPM_SUCCESS 0x0
+
+/* Section 17 */
+#define TPM_ORD_EXTEND 0x0014
+
+#define SHA1_DIGEST_SIZE 20
+
+/* Section 5.4 */
+struct tpm_sha1_digest {
+   u8 digest[SHA1_DIGEST_SIZE];
+};
+struct tpm_digest {
+   TPM_PCRINDEX pcr;
+   union {
+   struct tpm_sha1_digest sha1;
+   } digest;
+};
+
+#define TPM_DIGEST struct tpm_sha1_digest
+#define TPM_CHOSENID_HASH  TPM_DIGEST
+#define TPM_COMPOSITE_HASH TPM_DIGEST
+#define TPM_DIRVALUE   TPM_DIGEST
+#define TPM_HMAC   TPM_DIGEST
+#define TPM_PCRVALUE   TPM_DIGEST
+#define TPM_AUDITDIGESTTPM_DIGEST
+#define TPM_DAA_TPM_SEED   TPM_DIGEST
+#define TPM_DAA_CONTEXT_SEED   TPM_DIGEST
+
+struct tpm_extend_cmd {
+   TPM_PCRINDEX pcr_num;
+   TPM_DIGEST digest;
+};
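For reference, a caller preparing a TPM_ORD_EXTEND request would pair a PCR
index with a SHA1 digest along these lines (an illustrative sketch only;
"measurement" is a placeholder for a SHA1 hash computed elsewhere):

	struct tpm_extend_cmd cmd;

	cmd.pcr_num = 17;	/* placeholder; e.g. a DRTM PCR */
	memcpy(cmd.digest.digest, measurement, SHA1_DIGEST_SIZE);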

[PATCH 02/13] x86: Secure Launch main header file

2020-09-24 Thread Ross Philipson
Introduce the main Secure Launch header file used in the early SL stub
and the early setup code.

Signed-off-by: Ross Philipson 
---
 include/linux/slaunch.h | 544 
 1 file changed, 544 insertions(+)
 create mode 100644 include/linux/slaunch.h

diff --git a/include/linux/slaunch.h b/include/linux/slaunch.h
new file mode 100644
index 000..88adc69
--- /dev/null
+++ b/include/linux/slaunch.h
@@ -0,0 +1,544 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+/*
+ * Main Secure Launch header file.
+ *
+ * Copyright (c) 2020, Oracle and/or its affiliates.
+ */
+
+#ifndef _LINUX_SLAUNCH_H
+#define _LINUX_SLAUNCH_H
+
+/*
+ * Secure Launch Defined State Flags
+ */
+#define SL_FLAG_ACTIVE         0x0001
+#define SL_FLAG_ARCH_SKINIT    0x0002
+#define SL_FLAG_ARCH_TXT       0x0004
+
+#ifdef CONFIG_SECURE_LAUNCH
+
+/*
+ * Secure Launch main definitions file.
+ *
+ * Copyright (c) 2019 Oracle and/or its affiliates. All rights reserved.
+ */
+
+#define __SL32_CS  0x0008
+#define __SL32_DS  0x0010
+
+#define SL_CPU_AMD 1
+#define SL_CPU_INTEL   2
+
+#define INTEL_CPUID_MFGID_EBX  0x756e6547 /* Genu */
+#define INTEL_CPUID_MFGID_EDX  0x49656e69 /* ineI */
+#define INTEL_CPUID_MFGID_ECX  0x6c65746e /* ntel */
+
+#define AMD_CPUID_MFGID_EBX    0x68747541 /* Auth */
+#define AMD_CPUID_MFGID_EDX    0x69746e65 /* enti */
+#define AMD_CPUID_MFGID_ECX    0x444d4163 /* cAMD */
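The six values above are the EBX/EDX/ECX contents that CPUID leaf 0 returns
for the "GenuineIntel" and "AuthenticAMD" vendor strings. A minimal sketch of
a vendor check against them, using the kernel's cpuid() helper (illustrative
only; sl_cpu_type() is a hypothetical name, not part of this patch):

	static int sl_cpu_type(void)
	{
		u32 eax, ebx, ecx, edx;

		/* CPUID leaf 0 returns the vendor string in EBX:EDX:ECX */
		cpuid(0, &eax, &ebx, &ecx, &edx);

		if (ebx == INTEL_CPUID_MFGID_EBX &&
		    edx == INTEL_CPUID_MFGID_EDX &&
		    ecx == INTEL_CPUID_MFGID_ECX)
			return SL_CPU_INTEL;

		if (ebx == AMD_CPUID_MFGID_EBX &&
		    edx == AMD_CPUID_MFGID_EDX &&
		    ecx == AMD_CPUID_MFGID_ECX)
			return SL_CPU_AMD;

		return 0;
	}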
+
+/*
+ * Intel Safer Mode Extensions (SMX)
+ *
+ * Intel SMX provides a programming interface to establish a Measured Launched
+ * Environment (MLE). The measurement and protection mechanisms in an MLE are
+ * supported by the capabilities of an Intel Trusted Execution Technology (TXT)
+ * platform. SMX is the processor’s programming interface in an Intel TXT
+ * platform.
+ *
+ * See Intel SDM Volume 2 - 6.1 "Safer Mode Extensions Reference"
+ */
+
+/*
+ * SMX GETSEC Leaf Functions
+ */
+#define SMX_X86_GETSEC_SEXIT   5
+#define SMX_X86_GETSEC_SMCTRL  7
+#define SMX_X86_GETSEC_WAKEUP  8
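Each leaf is selected by the value in EAX when the GETSEC instruction (opcode
0x0f 0x37) executes; the SEXIT helper in patch 11 of this series hard-codes
that pattern for SMX_X86_GETSEC_SEXIT. A generic form might look like this
(illustrative only; CR4.SMXE must already be set, and leaves that take extra
operands would also need EBX etc. loaded first):

	static inline void smx_getsec(u32 leaf)
	{
		/* GETSEC: leaf function selected by EAX */
		asm volatile (".byte 0x0f,0x37\n" : : "a" (leaf));
	}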
+
+/*
+ * Intel Trusted Execution Technology MMIO Registers Banks
+ */
+#define TXT_PUB_CONFIG_REGS_BASE   0xfed30000
+#define TXT_PRIV_CONFIG_REGS_BASE  0xfed20000
+#define TXT_NR_CONFIG_PAGES ((TXT_PUB_CONFIG_REGS_BASE - \
+ TXT_PRIV_CONFIG_REGS_BASE) >> PAGE_SHIFT)
+
+/*
+ * Intel Trusted Execution Technology (TXT) Registers
+ */
+#define TXT_CR_STS                 0x0000
+#define TXT_CR_ESTS                0x0008
+#define TXT_CR_ERRORCODE           0x0030
+#define TXT_CR_CMD_RESET           0x0038
+#define TXT_CR_CMD_CLOSE_PRIVATE   0x0048
+#define TXT_CR_DIDVID              0x0110
+#define TXT_CR_VER_EMIF            0x0200
+#define TXT_CR_CMD_UNLOCK_MEM_CONFIG   0x0218
+#define TXT_CR_SINIT_BASE          0x0270
+#define TXT_CR_SINIT_SIZE          0x0278
+#define TXT_CR_MLE_JOIN            0x0290
+#define TXT_CR_HEAP_BASE           0x0300
+#define TXT_CR_HEAP_SIZE           0x0308
+#define TXT_CR_SCRATCHPAD          0x0378
+#define TXT_CR_CMD_OPEN_LOCALITY1  0x0380
+#define TXT_CR_CMD_CLOSE_LOCALITY1 0x0388
+#define TXT_CR_CMD_OPEN_LOCALITY2  0x0390
+#define TXT_CR_CMD_CLOSE_LOCALITY2 0x0398
+#define TXT_CR_CMD_SECRETS         0x08e0
+#define TXT_CR_CMD_NO_SECRETS      0x08e8
+#define TXT_CR_E2STS               0x08f0
+
+/* TXTCR_STS status bits */
+#define TXT_SENTER_DONE_STS    (1<<0)
+#define TXT_SEXIT_DONE_STS (1<<1)
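The offsets above are relative to one of the two MMIO banks. As an
illustration, checking the SENTER-done status bit through the public bank
could be sketched as below, mirroring the ioremap()/memcpy_fromio() pattern
that slaunch_finalize() uses in patch 11 (txt_senter_done() is a hypothetical
helper, not code from this series):

	static bool txt_senter_done(void)
	{
		void __iomem *config;
		u64 sts;

		config = ioremap(TXT_PUB_CONFIG_REGS_BASE,
				 TXT_NR_CONFIG_PAGES * PAGE_SIZE);
		if (!config)
			return false;

		memcpy_fromio(&sts, config + TXT_CR_STS, sizeof(u64));
		iounmap(config);

		return sts & TXT_SENTER_DONE_STS;
	}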
+
+/*
+ * SINIT/MLE Capabilities Field Bit Definitions
+ */
+#define TXT_SINIT_MLE_CAP_WAKE_GETSEC  0
+#define TXT_SINIT_MLE_CAP_WAKE_MONITOR 1
+
+/*
+ * OS/MLE Secure Launch Specific Definitions
+ */
+#define TXT_OS_MLE_STRUCT_VERSION  1
+#define TXT_OS_MLE_MAX_VARIABLE_MTRRS  32
+
+/*
+ * TXT Heap Table Enumeration
+ */
+#define TXT_BIOS_DATA_TABLE        1
+#define TXT_OS_MLE_DATA_TABLE      2
+#define TXT_OS_SINIT_DATA_TABLE    3
+#define TXT_SINIT_MLE_DATA_TABLE   4
+
+/*
+ * Secure Launch Defined Error Codes used in MLE-initiated TXT resets.
+ *
+ * TXT Specification
+ * Appendix I ACM Error Codes
+ */
+#define SL_ERROR_GENERIC           0xc0008001
+#define SL_ERROR_TPM_INIT          0xc0008002
+#define SL_ERROR_TPM_INVALID_LOG20 0xc0008003
+#define SL_ERROR_TPM_LOGGING_FAILED    0xc0008004
+#define SL_ERROR_TPM_GET_LOC       0xc0008005
+#define SL_ERROR_TPM_EXTEND        0xc0008006
+#define SL_ERROR_MTRR_INV_VCNT     0xc0008007
+#define SL_ERROR_MTRR_INV_DEF_TYPE 0xc0008008
+#define SL_ERROR_MTRR_INV_BASE     0xc0008009
+#define SL_ERROR_MTRR_INV_MASK     0xc000800a
+#define SL_ERROR_MSR_INV_MISC_EN   0xc000800b
+#define SL_ERROR_INV_AP_INTERRUPT  0xc000800c
+#define SL_ERROR_RESERVE_AP_WAKE   0xc000800d
+#define SL_ERROR_HEAP_WALK         0xc000800e
+#define SL_ERROR_HEAP_MAP          0xc000800f
+#define SL_ERROR_HEAP_MDR_VALS     0xc0008010

[PATCH 01/13] x86: Secure Launch Kconfig

2020-09-24 Thread Ross Philipson
Initial bits to bring in Secure Launch functionality. Add Kconfig
options for compiling in/out the Secure Launch code.

Signed-off-by: Ross Philipson 
---
 arch/x86/Kconfig | 36 
 1 file changed, 36 insertions(+)

diff --git a/arch/x86/Kconfig b/arch/x86/Kconfig
index 7101ac6..8957981 100644
--- a/arch/x86/Kconfig
+++ b/arch/x86/Kconfig
@@ -1968,6 +1968,42 @@ config EFI_MIXED
 
   If unsure, say N.
 
+config SECURE_LAUNCH
+   bool "Secure Launch support"
+   default n
+   depends on X86_64
+   help
+  The Secure Launch feature allows a kernel to be loaded
+  directly through an Intel TXT measured launch. Intel TXT
+  establishes a Dynamic Root of Trust for Measurement (DRTM)
+  where the CPU measures the kernel image. This feature then
+  continues the measurement chain over kernel configuration
+  information and init images.
+
+choice
+   prompt "Select Secure Launch Algorithm for TPM2"
+   depends on SECURE_LAUNCH
+
+config SECURE_LAUNCH_SHA1
+   bool "Secure Launch TPM1 SHA1"
+   help
+  When using Secure Launch and TPM1 is present, use SHA1 hash
+  algorithm for measurements.
+
+config SECURE_LAUNCH_SHA256
+   bool "Secure Launch TPM2 SHA256"
+   help
+  When using Secure Launch and TPM2 is present, use SHA256 hash
+  algorithm for measurements.
+
+config SECURE_LAUNCH_SHA512
+   bool "Secure Launch TPM2 SHA512"
+   help
+  When using Secure Launch and TPM2 is present, use SHA512 hash
+  algorithm for measurements.
+
+endchoice
+
 config SECCOMP
def_bool y
prompt "Enable seccomp to safely compute untrusted bytecode"
-- 
1.8.3.1

___
iommu mailing list
iommu@lists.linux-foundation.org
https://lists.linuxfoundation.org/mailman/listinfo/iommu


[PATCH 12/13] reboot: Secure Launch SEXIT support on reboot paths

2020-09-24 Thread Ross Philipson
If the MLE kernel is being powered off, rebooted or halted,
then SEXIT must be called. Note that the SEXIT GETSEC leaf
can only be called after machine_shutdown() has been done on
these paths. machine_shutdown() is not called on a few paths,
e.g. when the poweroff action has no registered callback (into
ACPI code) or when an emergency reset is done. In those cases
only the TXT registers are finalized and SEXIT is skipped.

Signed-off-by: Ross Philipson 
---
 arch/x86/kernel/reboot.c | 10 ++
 1 file changed, 10 insertions(+)

diff --git a/arch/x86/kernel/reboot.c b/arch/x86/kernel/reboot.c
index a515e2d..90d0647 100644
--- a/arch/x86/kernel/reboot.c
+++ b/arch/x86/kernel/reboot.c
@@ -12,6 +12,7 @@
 #include 
 #include 
 #include 
+#include <linux/slaunch.h>
 #include 
 #include 
 #include 
@@ -732,6 +733,7 @@ static void native_machine_restart(char *__unused)
 
if (!reboot_force)
machine_shutdown();
+   slaunch_finalize(!reboot_force);
__machine_emergency_restart(0);
 }
 
@@ -742,6 +744,9 @@ static void native_machine_halt(void)
 
tboot_shutdown(TB_SHUTDOWN_HALT);
 
+   /* SEXIT done after machine_shutdown() to meet TXT requirements */
+   slaunch_finalize(1);
+
stop_this_cpu(NULL);
 }
 
@@ -750,8 +755,12 @@ static void native_machine_power_off(void)
if (pm_power_off) {
if (!reboot_force)
machine_shutdown();
+   slaunch_finalize(!reboot_force);
pm_power_off();
+   } else {
+   slaunch_finalize(0);
}
+
/* A fallback in case there is no PM info available */
tboot_shutdown(TB_SHUTDOWN_HALT);
 }
@@ -779,6 +788,7 @@ void machine_shutdown(void)
 
 void machine_emergency_restart(void)
 {
+   slaunch_finalize(0);
__machine_emergency_restart(1);
 }
 
-- 
1.8.3.1

___
iommu mailing list
iommu@lists.linux-foundation.org
https://lists.linuxfoundation.org/mailman/listinfo/iommu


[PATCH 06/13] x86: Add early general TPM interface support for Secure Launch

2020-09-24 Thread Ross Philipson
From: "Daniel P. Smith" 

This commit exposes a minimal general interface for the compressed
kernel to request the required TPM operations to send measurements to
a TPM.

Signed-off-by: Daniel P. Smith 
Signed-off-by: Ross Philipson 
---
 arch/x86/boot/compressed/Makefile  |   2 +-
 arch/x86/boot/compressed/tpm/tpm.c | 145 +
 2 files changed, 146 insertions(+), 1 deletion(-)
 create mode 100644 arch/x86/boot/compressed/tpm/tpm.c

diff --git a/arch/x86/boot/compressed/Makefile b/arch/x86/boot/compressed/Makefile
index a4308d5..35947b9 100644
--- a/arch/x86/boot/compressed/Makefile
+++ b/arch/x86/boot/compressed/Makefile
@@ -101,7 +101,7 @@ vmlinux-objs-$(CONFIG_SECURE_LAUNCH_SHA256) += $(obj)/early_sha256.o
 vmlinux-objs-$(CONFIG_SECURE_LAUNCH_SHA512) += $(obj)/early_sha512.o
 vmlinux-objs-$(CONFIG_SECURE_LAUNCH) += $(obj)/tpm/tpmio.o $(obj)/tpm/tpm_buff.o \
    $(obj)/tpm/tis.o $(obj)/tpm/crb.o $(obj)/tpm/tpm1_cmds.o \
-   $(obj)/tpm/tpm2_cmds.o $(obj)/tpm/tpm2_auth.o
+   $(obj)/tpm/tpm2_cmds.o $(obj)/tpm/tpm2_auth.o $(obj)/tpm/tpm.o
 
 # The compressed kernel is built with -fPIC/-fPIE so that a boot loader
 # can place it anywhere in memory and it will still run. However, since
diff --git a/arch/x86/boot/compressed/tpm/tpm.c b/arch/x86/boot/compressed/tpm/tpm.c
new file mode 100644
index 000..0fe62d2
--- /dev/null
+++ b/arch/x86/boot/compressed/tpm/tpm.c
@@ -0,0 +1,145 @@
+// SPDX-License-Identifier: GPL-2.0
+/*
+ * Copyright (c) 2020 Apertus Solutions, LLC
+ *
+ * Author(s):
+ *  Daniel P. Smith 
+ *
+ */
+
+#include 
+#include 
+#include 
+#include 
+#include "tpm.h"
+#include "tpmbuff.h"
+#include "tis.h"
+#include "crb.h"
+#include "tpm_common.h"
+#include "tpm1.h"
+#include "tpm2.h"
+#include "tpm2_constants.h"
+
+static struct tpm tpm;
+
+static void find_interface_and_family(struct tpm *t)
+{
+   struct tpm_interface_id intf_id;
+   struct tpm_intf_capability intf_cap;
+
+   /* Sort out whether it is 1.2 */
+   intf_cap.val = tpm_read32(TPM_INTF_CAPABILITY_0);
+   if ((intf_cap.interface_version == TPM12_TIS_INTF_12) ||
+   (intf_cap.interface_version == TPM12_TIS_INTF_13)) {
+   t->family = TPM12;
+   t->intf = TPM_TIS;
+   return;
+   }
+
+   /* Assume that it is 2.0 and TIS */
+   t->family = TPM20;
+   t->intf = TPM_TIS;
+
+   /* Check if the interface is CRB */
+   intf_id.val = tpm_read32(TPM_INTERFACE_ID_0);
+   if (intf_id.interface_type == TPM_CRB_INTF_ACTIVE)
+   t->intf = TPM_CRB;
+}
+
+struct tpm *enable_tpm(void)
+{
+   struct tpm *t = &tpm;
+
+   find_interface_and_family(t);
+
+   switch (t->intf) {
+   case TPM_TIS:
+   if (!tis_init(t))
+   return NULL;
+   break;
+   case TPM_CRB:
+   if (!crb_init(t))
+   return NULL;
+   break;
+   }
+
+   return t;
+}
+
+u8 tpm_request_locality(struct tpm *t, u8 l)
+{
+   u8 ret = TPM_NO_LOCALITY;
+
+   ret = t->ops.request_locality(l);
+
+   if (ret < TPM_MAX_LOCALITY)
+   t->buff = alloc_tpmbuff(t->intf, ret);
+
+   return ret;
+}
+
+void tpm_relinquish_locality(struct tpm *t)
+{
+   t->ops.relinquish_locality();
+
+   free_tpmbuff(t->buff, t->intf);
+}
+
+#define MAX_TPM_EXTEND_SIZE 70 /* TPM2 SHA512 is the largest */
+int tpm_extend_pcr(struct tpm *t, u32 pcr, u16 algo,
+   u8 *digest)
+{
+   int ret = 0;
+
+   if (t->buff == NULL)
+   return -EINVAL;
+
+   if (t->family == TPM12) {
+   struct tpm_digest d;
+
+   if (algo != TPM_ALG_SHA1)
+   return -EINVAL;
+
+   d.pcr = pcr;
+   memcpy((void *)d.digest.sha1.digest,
+   digest, SHA1_DIGEST_SIZE);
+
+   ret = tpm1_pcr_extend(t, &d);
+   } else if (t->family == TPM20) {
+   struct tpml_digest_values *d;
+   u8 buf[MAX_TPM_EXTEND_SIZE];
+
+   d = (struct tpml_digest_values *) buf;
+   d->count = 1;
+   d->digests->alg = algo;
+   switch (algo) {
+   case TPM_ALG_SHA1:
+   memcpy(d->digests->digest, digest, SHA1_SIZE);
+   break;
+   case TPM_ALG_SHA256:
+   memcpy(d->digests->digest, digest, SHA256_SIZE);
+   break;
+   case TPM_ALG_SHA384:
+   memcpy(d->digests->digest, digest, SHA384_SIZE);
+   break;
+   case TPM_ALG_SHA512:
+   memcpy(d->digests->digest, digest, SHA512_SIZE);
+   break;
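Taken together, a caller in the compressed kernel would drive this interface
roughly as follows (a hedged sketch of the intended call sequence, not code
from this patch; the PCR number and "digest" buffer are placeholders):

	struct tpm *t;
	u8 loc;

	t = enable_tpm();	/* probe TPM family and interface */
	if (!t)
		return;

	loc = tpm_request_locality(t, 2);	/* DRTM measurements use locality 2 */
	if (loc >= TPM_MAX_LOCALITY)
		return;

	/* "digest" would hold a hash computed by the early SHA code */
	tpm_extend_pcr(t, 17, TPM_ALG_SHA256, digest);

	tpm_relinquish_locality(t);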

[PATCH 11/13] kexec: Secure Launch kexec SEXIT support

2020-09-24 Thread Ross Philipson
Prior to running the next kernel via kexec, the Secure Launch code
closes down private SMX resources and does an SEXIT. This allows the
next kernel to start normally without any issues starting the APs etc.

Signed-off-by: Ross Philipson 
---
 arch/x86/kernel/slaunch.c | 70 +++
 kernel/kexec_core.c   |  4 +++
 2 files changed, 74 insertions(+)

diff --git a/arch/x86/kernel/slaunch.c b/arch/x86/kernel/slaunch.c
index 7bdb89e..b2221d8 100644
--- a/arch/x86/kernel/slaunch.c
+++ b/arch/x86/kernel/slaunch.c
@@ -784,3 +784,73 @@ static void __exit slaunch_exit(void)
 late_initcall(slaunch_late_init);
 
 __exitcall(slaunch_exit);
+
+static inline void smx_getsec_sexit(void)
+{
+   asm volatile (".byte 0x0f,0x37\n"
+ : : "a" (SMX_X86_GETSEC_SEXIT));
+}
+
+void slaunch_finalize(int do_sexit)
+{
+   void __iomem *config;
+   u64 one = 1, val;
+
+   if (!(slaunch_get_flags() & (SL_FLAG_ACTIVE|SL_FLAG_ARCH_TXT)))
+   return;
+
+   config = ioremap(TXT_PRIV_CONFIG_REGS_BASE, TXT_NR_CONFIG_PAGES *
+PAGE_SIZE);
+   if (!config) {
+   pr_emerg("Error SEXIT failed to ioremap TXT private reqs\n");
+   return;
+   }
+
+   /* Clear secrets bit for SEXIT */
+   memcpy_toio(config + TXT_CR_CMD_NO_SECRETS, &one, sizeof(u64));
+   memcpy_fromio(&val, config + TXT_CR_E2STS, sizeof(u64));
+
+   /* Unlock memory configurations */
+   memcpy_toio(config + TXT_CR_CMD_UNLOCK_MEM_CONFIG, &one, sizeof(u64));
+   memcpy_fromio(&val, config + TXT_CR_E2STS, sizeof(u64));
+
+   /* Close the TXT private register space */
+   memcpy_fromio(&val, config + TXT_CR_E2STS, sizeof(u64));
+   memcpy_toio(config + TXT_CR_CMD_CLOSE_PRIVATE, &one, sizeof(u64));
+
+   /*
+* Calls to iounmap are not being done because of the state of the
+* system this late in the kexec process. Local IRQs are disabled and
+* iounmap causes a TLB flush which in turn causes a warning. Leaving
+* these mappings is not an issue since the next kernel is going to
+* completely re-setup memory management.
+*/
+
+   /* Map public registers and do a final read fence */
+   config = ioremap(TXT_PUB_CONFIG_REGS_BASE, TXT_NR_CONFIG_PAGES *
+PAGE_SIZE);
+   if (!config) {
+   pr_emerg("Error SEXIT failed to ioremap TXT public reqs\n");
+   return;
+   }
+
+   memcpy_fromio(&val, config + TXT_CR_E2STS, sizeof(u64));

+   pr_emerg("TXT clear secrets bit and unlock memory complete.\n");
+
+   if (!do_sexit)
+   return;
+
+   if (smp_processor_id() != 0) {
+   pr_emerg("Error TXT SEXIT must be called on CPU 0\n");
+   return;
+   }
+
+   /* Enable SMX mode; CR4.SMXE must be set to execute GETSEC[SEXIT] */
+   cr4_set_bits(X86_CR4_SMXE);
+
+   /* Do the SEXIT SMX operation */
+   smx_getsec_sexit();
+
+   pr_emerg("TXT SEXIT complete.");
+}
diff --git a/kernel/kexec_core.c b/kernel/kexec_core.c
index c19c0da..6b9ac11 100644
--- a/kernel/kexec_core.c
+++ b/kernel/kexec_core.c
@@ -37,6 +37,7 @@
 #include 
 #include 
 #include 
+#include <linux/slaunch.h>
 
 #include 
 #include 
@@ -1179,6 +1180,9 @@ int kernel_kexec(void)
cpu_hotplug_enable();
pr_notice("Starting new kernel\n");
machine_shutdown();
+
+   /* Finalize TXT registers and do SEXIT */
+   slaunch_finalize(1);
}
 
machine_kexec(kexec_image);
-- 
1.8.3.1

___
iommu mailing list
iommu@lists.linux-foundation.org
https://lists.linuxfoundation.org/mailman/listinfo/iommu


[PATCH 13/13] tpm: Allow locality 2 to be set when initializing the TPM for Secure Launch

2020-09-24 Thread Ross Philipson
The Secure Launch MLE environment uses PCRs that are only accessible from
the DRTM locality 2. By default the TPM drivers always initialize the
locality to 0. When a Secure Launch is in progress, initialize the
locality to 2.

Signed-off-by: Ross Philipson 
---
 drivers/char/tpm/tpm-chip.c | 13 +++--
 1 file changed, 11 insertions(+), 2 deletions(-)

diff --git a/drivers/char/tpm/tpm-chip.c b/drivers/char/tpm/tpm-chip.c
index ddaeceb..f35faab 100644
--- a/drivers/char/tpm/tpm-chip.c
+++ b/drivers/char/tpm/tpm-chip.c
@@ -23,6 +23,7 @@
 #include 
 #include 
 #include 
+#include <linux/slaunch.h>
 #include "tpm.h"
 
 DEFINE_IDR(dev_nums_idr);
@@ -34,12 +35,20 @@
 
 static int tpm_request_locality(struct tpm_chip *chip)
 {
-   int rc;
+   int rc, locality;
 
if (!chip->ops->request_locality)
return 0;
 
-   rc = chip->ops->request_locality(chip, 0);
+   if (slaunch_get_flags() & (SL_FLAG_ACTIVE|SL_FLAG_ARCH_TXT)) {
+   dev_dbg(&chip->dev, "setting TPM locality to 2 for MLE\n");
+   locality = 2;
+   } else {
+   dev_dbg(&chip->dev, "setting TPM locality to 0\n");
+   locality = 0;
+   }
+
+   rc = chip->ops->request_locality(chip, locality);
if (rc < 0)
return rc;
 
-- 
1.8.3.1

___
iommu mailing list
iommu@lists.linux-foundation.org
https://lists.linuxfoundation.org/mailman/listinfo/iommu