[PATCH v11 02/20] x86: Secure Launch Kconfig

2024-09-13 Thread Ross Philipson
Initial bits to bring in Secure Launch functionality. Add Kconfig
options for compiling in/out the Secure Launch code.

Signed-off-by: Ross Philipson 
---
 arch/x86/Kconfig | 11 +++
 1 file changed, 11 insertions(+)

diff --git a/arch/x86/Kconfig b/arch/x86/Kconfig
index 007bab9f2a0e..24df5f468fdc 100644
--- a/arch/x86/Kconfig
+++ b/arch/x86/Kconfig
@@ -2056,6 +2056,17 @@ config EFI_RUNTIME_MAP
 
  See also Documentation/ABI/testing/sysfs-firmware-efi-runtime-map.
 
+config SECURE_LAUNCH
+   bool "Secure Launch support"
+   depends on X86_64 && X86_X2APIC && TCG_TPM && CRYPTO_LIB_SHA1 && 
CRYPTO_LIB_SHA256
+   help
+  The Secure Launch feature allows a kernel to be loaded
+  directly through an Intel TXT measured launch. Intel TXT
+  establishes a Dynamic Root of Trust for Measurement (DRTM)
+  where the CPU measures the kernel image. This feature then
+  continues the measurement chain over kernel configuration
+  information and init images.
+
 source "kernel/Kconfig.hz"
 
 config ARCH_SUPPORTS_KEXEC
-- 
2.39.3




[PATCH v11 00/20] x86: Trenchboot secure dynamic launch Linux kernel support

2024-09-13 Thread Ross Philipson
The larger focus of the TrenchBoot project (https://github.com/TrenchBoot) is to
enhance the boot security and integrity in a unified manner. The first area of
focus has been on the Trusted Computing Group's Dynamic Launch for establishing
a hardware Root of Trust for Measurement, also know as DRTM (Dynamic Root of
Trust for Measurement). The project has been and continues to work on providing
a unified means to Dynamic Launch that is a cross-platform (Intel and AMD) and
cross-architecture (x86 and Arm), with our recent involvment in the upcoming
Arm DRTM specification. The order of introducing DRTM to the Linux kernel
follows the maturity of DRTM in the architectures. Intel's Trusted eXecution
Technology (TXT) is present today and only requires a preamble loader, e.g. a
boot loader, and an OS kernel that is TXT-aware. AMD DRTM implementation has
been present since the introduction of AMD-V but requires an additional
component that is AMD specific and referred to in the specification as the
Secure Loader, which the TrenchBoot project has an active prototype in
development. Finally Arm's implementation is in specification development stage
and the project is looking to support it when it becomes available.

This patchset provides detailed documentation of DRTM, the approach used for
adding the capbility, and relevant API/ABI documentation. In addition to the
documentation the patch set introduces Intel TXT support as the first platform
for Linux Secure Launch.

A quick note on terminology. The larger open source project itself is called
TrenchBoot, which is hosted on Github (links below). The kernel feature enabling
the use of Dynamic Launch technology is referred to as "Secure Launch" within
the kernel code. As such the prefixes sl_/SL_ or slaunch/SLAUNCH will be seen
in the code. The stub code discussed above is referred to as the SL stub.

Links:

The TrenchBoot project including documentation:

https://trenchboot.org

The TrenchBoot project on Github:

https://github.com/trenchboot

Intel TXT is documented in its own specification and in the SDM Instruction Set 
volume:

https://www.intel.com/content/dam/www/public/us/en/documents/guides/intel-txt-software-development-guide.pdf
https://software.intel.com/en-us/articles/intel-sdm

AMD SKINIT is documented in the System Programming manual:

https://www.amd.com/system/files/TechDocs/24593.pdf

The TrenchBoot project provides a quick start guide to help get a system
up and running with Secure Launch for Linux:

https://github.com/TrenchBoot/documentation/blob/master/QUICKSTART.md

Patch set based on commit:

torvalds/master/77f587896757708780a7e8792efe62939f25a5ab

Thanks
Ross Philipson and Daniel P. Smith

Changes in v2:

 - Modified 32b entry code to prevent causing relocations in the compressed
   kernel.
 - Dropped patches for compressed kernel TPM PCR extender.
 - Modified event log code to insert log delimiter events and not rely
   on TPM access.
 - Stop extending PCRs in the early Secure Launch stub code.
 - Removed Kconfig options for hash algorithms and use the algorithms the
   ACM used.
 - Match Secure Launch measurement algorithm use to those reported in the
   TPM 2.0 event log.
 - Read the TPM events out of the TPM and extend them into the PCRs using
   the mainline TPM driver. This is done in the late initcall module.
 - Allow use of alternate PCR 19 and 20 for post ACM measurements.
 - Add Kconfig constraints needed by Secure Launch (disable KASLR
   and add x2apic dependency).
 - Fix testing of SL_FLAGS when determining if Secure Launch is active
   and the architecture is TXT.
 - Use SYM_DATA_START_LOCAL macros in early entry point code.
 - Security audit changes:
   - Validate buffers passed to MLE do not overlap the MLE and are
 properly laid out.
   - Validate buffers and memory regions used by the MLE are
 protected by IOMMU PMRs.
 - Force IOMMU to not use passthrough mode during a Secure Launch.
 - Prevent KASLR use during a Secure Launch.

Changes in v3:

 - Introduce x86 documentation patch to provide background, overview
   and configuration/ABI information for the Secure Launch kernel
   feature.
 - Remove the IOMMU patch with special cases for disabling IOMMU
   passthrough. Configuring the IOMMU is now a documentation matter
   in the previously mentioned new patch.
 - Remove special case KASLR disabling code. Configuring KASLR is now
   a documentation matter in the previously mentioned new patch.
 - Fix incorrect panic on TXT public register read.
 - Properly handle and measure setup_indirect bootparams in the early
   launch code.
 - Use correct compressed kernel image base address when testing buffers
   in the early launch stub code. This bug was introduced by the changes
   to avoid relocation in the compressed kernel.
 - Use CPUID feature bits instead of CPUID vendor strings to determine
   if SMX mode is supported and the system is Intel.
 - Remove early NMI re-enable on the BSP. This

[PATCH v11 20/20] x86/efi: EFI stub DRTM launch support for Secure Launch

2024-09-13 Thread Ross Philipson
This support allows the DRTM launch to be initiated after an EFI stub
launch of the Linux kernel is done. This is accomplished by providing
a handler to jump to when a Secure Launch is in progress. This has to be
called after the EFI stub does Exit Boot Services.

Signed-off-by: Ross Philipson 
Reviewed-by: Ard Biesheuvel 
---
 drivers/firmware/efi/libstub/efistub.h  |  8 ++
 drivers/firmware/efi/libstub/x86-stub.c | 99 +
 2 files changed, 107 insertions(+)

diff --git a/drivers/firmware/efi/libstub/efistub.h 
b/drivers/firmware/efi/libstub/efistub.h
index d33ccbc4a2c6..baf42d6d0796 100644
--- a/drivers/firmware/efi/libstub/efistub.h
+++ b/drivers/firmware/efi/libstub/efistub.h
@@ -135,6 +135,14 @@ void efi_set_u64_split(u64 data, u32 *lo, u32 *hi)
*hi = upper_32_bits(data);
 }
 
+static inline
+void efi_set_u64_form(u32 lo, u32 hi, u64 *data)
+{
+   u64 upper = hi;
+
+   *data = lo | upper << 32;
+}
+
 /*
  * Allocation types for calls to boottime->allocate_pages.
  */
diff --git a/drivers/firmware/efi/libstub/x86-stub.c 
b/drivers/firmware/efi/libstub/x86-stub.c
index f8e465da344d..2e063bce1080 100644
--- a/drivers/firmware/efi/libstub/x86-stub.c
+++ b/drivers/firmware/efi/libstub/x86-stub.c
@@ -9,6 +9,8 @@
 #include 
 #include 
 #include 
+#include 
+#include 
 
 #include 
 #include 
@@ -923,6 +925,98 @@ static efi_status_t efi_decompress_kernel(unsigned long 
*kernel_entry)
return efi_adjust_memory_range_protection(addr, kernel_text_size);
 }
 
+#if (IS_ENABLED(CONFIG_SECURE_LAUNCH))
+static bool efi_secure_launch_update_boot_params(struct slr_table *slrt,
+struct boot_params 
*boot_params)
+{
+   struct slr_entry_intel_info *txt_info;
+   struct slr_entry_policy *policy;
+   struct txt_os_mle_data *os_mle;
+   bool updated = false;
+   int i;
+
+   txt_info = slr_next_entry_by_tag(slrt, NULL, SLR_ENTRY_INTEL_INFO);
+   if (!txt_info)
+   return false;
+
+   os_mle = txt_os_mle_data_start((void *)txt_info->txt_heap);
+   if (!os_mle)
+   return false;
+
+   os_mle->boot_params_addr = (u64)boot_params;
+
+   policy = slr_next_entry_by_tag(slrt, NULL, SLR_ENTRY_ENTRY_POLICY);
+   if (!policy)
+   return false;
+
+   for (i = 0; i < policy->nr_entries; i++) {
+   if (policy->policy_entries[i].entity_type == 
SLR_ET_BOOT_PARAMS) {
+   policy->policy_entries[i].entity = (u64)boot_params;
+   updated = true;
+   break;
+   }
+   }
+
+   /*
+* If this is a PE entry into EFI stub the mocked up boot params will
+* be missing some of the setup header data needed for the second stage
+* of the Secure Launch boot.
+*/
+   if (image) {
+   struct setup_header *hdr = (struct setup_header *)((u8 
*)image->image_base +
+   offsetof(struct boot_params, hdr));
+   u64 cmdline_ptr;
+
+   boot_params->hdr.setup_sects = hdr->setup_sects;
+   boot_params->hdr.syssize = hdr->syssize;
+   boot_params->hdr.version = hdr->version;
+   boot_params->hdr.loadflags = hdr->loadflags;
+   boot_params->hdr.kernel_alignment = hdr->kernel_alignment;
+   boot_params->hdr.min_alignment = hdr->min_alignment;
+   boot_params->hdr.xloadflags = hdr->xloadflags;
+   boot_params->hdr.init_size = hdr->init_size;
+   boot_params->hdr.kernel_info_offset = hdr->kernel_info_offset;
+   efi_set_u64_form(boot_params->hdr.cmd_line_ptr, 
boot_params->ext_cmd_line_ptr,
+&cmdline_ptr);
+   boot_params->hdr.cmdline_size = strlen((const char 
*)cmdline_ptr);
+   }
+
+   return updated;
+}
+
+static void efi_secure_launch(struct boot_params *boot_params)
+{
+   struct slr_entry_dl_info *dlinfo;
+   efi_guid_t guid = SLR_TABLE_GUID;
+   dl_handler_func handler_callback;
+   struct slr_table *slrt;
+
+   /*
+* The presence of this table indicated a Secure Launch
+* is being requested.
+*/
+   slrt = (struct slr_table *)get_efi_config_table(guid);
+   if (!slrt || slrt->magic != SLR_TABLE_MAGIC)
+   return;
+
+   /*
+* Since the EFI stub library creates its own boot_params on entry, the
+* SLRT and TXT heap have to be updated with this version.
+*/
+   if (!efi_secure_launch_update_boot_params(slrt, boot_params))
+   return;
+
+   /* Jump through DL stub to initiate Secure Launch */
+   dlinfo = slr_next_entry_by_tag(slrt, NULL, SLR_ENTRY_DL_INFO);
+
+   handler_callback = (dl_handler_func)dlinfo->dl_

[PATCH v11 19/20] x86: Secure Launch late initcall platform module

2024-09-13 Thread Ross Philipson
From: "Daniel P. Smith" 

The Secure Launch platform module is a late init module. During the
init call, the TPM event log is read and measurements taken in the
early boot stub code are located. These measurements are extended
into the TPM PCRs using the mainline TPM kernel driver.

The platform module also registers the securityfs nodes to allow
access to TXT register fields on Intel along with the fetching of
and writing events to the late launch TPM log.

Signed-off-by: Daniel P. Smith 
Signed-off-by: garnetgrimm 
Signed-off-by: Ross Philipson 
---
 arch/x86/kernel/Makefile   |   1 +
 arch/x86/kernel/slmodule.c | 508 +
 2 files changed, 509 insertions(+)
 create mode 100644 arch/x86/kernel/slmodule.c

diff --git a/arch/x86/kernel/Makefile b/arch/x86/kernel/Makefile
index a18a8239bde5..6028903d6661 100644
--- a/arch/x86/kernel/Makefile
+++ b/arch/x86/kernel/Makefile
@@ -73,6 +73,7 @@ obj-$(CONFIG_IA32_EMULATION)  += tls.o
 obj-y  += step.o
 obj-$(CONFIG_INTEL_TXT)+= tboot.o
 obj-$(CONFIG_SECURE_LAUNCH)+= slaunch.o
+obj-$(CONFIG_SECURE_LAUNCH)+= slmodule.o
 obj-$(CONFIG_ISA_DMA_API)  += i8237.o
 obj-y  += stacktrace.o
 obj-y  += cpu/
diff --git a/arch/x86/kernel/slmodule.c b/arch/x86/kernel/slmodule.c
new file mode 100644
index ..6f85c43c4d3e
--- /dev/null
+++ b/arch/x86/kernel/slmodule.c
@@ -0,0 +1,508 @@
+// SPDX-License-Identifier: GPL-2.0
+/*
+ * Secure Launch late validation/setup, securityfs exposure and finalization.
+ *
+ * Copyright (c) 2024 Apertus Solutions, LLC
+ * Copyright (c) 2024 Assured Information Security, Inc.
+ * Copyright (c) 2024, Oracle and/or its affiliates.
+ */
+
+#define pr_fmt(fmt) KBUILD_MODNAME ": " fmt
+
+#include 
+#include 
+#include 
+#include 
+#include 
+#include 
+#include 
+#include 
+#include 
+#include 
+#include 
+#include 
+#include 
+#include 
+
+/*
+ * The macro DECLARE_TXT_PUB_READ_U is used to read values from the TXT
+ * public registers as unsigned values.
+ */
+#define DECLARE_TXT_PUB_READ_U(size, fmt, msg_size)\
+static ssize_t txt_pub_read_u##size(unsigned int offset,   \
+   loff_t *read_offset,\
+   size_t read_len,\
+   char __user *buf)   \
+{  \
+   char msg_buffer[msg_size];  \
+   u##size reg_value = 0;  \
+   void __iomem *txt;  \
+   \
+   txt = ioremap(TXT_PUB_CONFIG_REGS_BASE, \
+   TXT_NR_CONFIG_PAGES * PAGE_SIZE);   \
+   if (!txt)   \
+   return -EFAULT; \
+   memcpy_fromio(®_value, txt + offset, sizeof(u##size));   \
+   iounmap(txt);   \
+   snprintf(msg_buffer, msg_size, fmt, reg_value); \
+   return simple_read_from_buffer(buf, read_len, read_offset,  \
+   &msg_buffer, msg_size); \
+}
+
+DECLARE_TXT_PUB_READ_U(8, "%#04x\n", 6);
+DECLARE_TXT_PUB_READ_U(32, "%#010x\n", 12);
+DECLARE_TXT_PUB_READ_U(64, "%#018llx\n", 20);
+
+#define DECLARE_TXT_FOPS(reg_name, reg_offset, reg_size)   \
+static ssize_t txt_##reg_name##_read(struct file *flip,
\
+   char __user *buf, size_t read_len, loff_t *read_offset) \
+{  \
+   return txt_pub_read_u##reg_size(reg_offset, read_offset,\
+   read_len, buf); \
+}  \
+static const struct file_operations reg_name##_ops = { \
+   .read = txt_##reg_name##_read,  \
+}
+
+DECLARE_TXT_FOPS(sts, TXT_CR_STS, 64);
+DECLARE_TXT_FOPS(ests, TXT_CR_ESTS, 8);
+DECLARE_TXT_FOPS(errorcode, TXT_CR_ERRORCODE, 32);
+DECLARE_TXT_FOPS(didvid, TXT_CR_DIDVID, 64);
+DECLARE_TXT_FOPS(e2sts, TXT_CR_E2STS, 64);
+DECLARE_TXT_FOPS(ver_emif, TXT_CR_VER_EMIF, 32);
+DECLARE_TXT_FOPS(scratchpad, TXT_CR_SCRATCHPAD, 64);
+
+/*
+ * Securityfs exposure
+ */
+struct memfile {
+   char *name;
+   void *addr;
+   size_t size;
+};
+
+static struct memfile sl_evtlog = {"eventlog", NULL, 0};
+static void *txt_heap;
+static struct txt_heap_event_log_pointer2_1_element *evtlog21;
+static DEFINE_MUTEX

[PATCH v11 18/20] tpm: Add sysfs interface to allow setting and querying the default locality

2024-09-13 Thread Ross Philipson
Expose a sysfs interface to allow user mode to set and query the default
locality set for the TPM chip.

Signed-off-by: Ross Philipson 
---
 drivers/char/tpm/tpm-sysfs.c | 30 ++
 1 file changed, 30 insertions(+)

diff --git a/drivers/char/tpm/tpm-sysfs.c b/drivers/char/tpm/tpm-sysfs.c
index 94231f052ea7..185a2f57d4cb 100644
--- a/drivers/char/tpm/tpm-sysfs.c
+++ b/drivers/char/tpm/tpm-sysfs.c
@@ -324,6 +324,34 @@ static ssize_t null_name_show(struct device *dev, struct 
device_attribute *attr,
 static DEVICE_ATTR_RO(null_name);
 #endif
 
+static ssize_t default_locality_show(struct device *dev,
+struct device_attribute *attr, char *buf)
+{
+   struct tpm_chip *chip = to_tpm_chip(dev);
+
+   return sprintf(buf, "%d\n", chip->default_locality);
+}
+
+static ssize_t default_locality_store(struct device *dev, struct 
device_attribute *attr,
+ const char *buf, size_t count)
+{
+   struct tpm_chip *chip = to_tpm_chip(dev);
+   unsigned int locality;
+
+   if (kstrtouint(buf, 0, &locality))
+   return -ERANGE;
+
+   if (locality >= TPM_MAX_LOCALITY)
+   return -ERANGE;
+
+   if (tpm_chip_set_default_locality(chip, (int)locality))
+   return count;
+   else
+   return 0;
+}
+
+static DEVICE_ATTR_RW(default_locality);
+
 static struct attribute *tpm1_dev_attrs[] = {
&dev_attr_pubek.attr,
&dev_attr_pcrs.attr,
@@ -336,6 +364,7 @@ static struct attribute *tpm1_dev_attrs[] = {
&dev_attr_durations.attr,
&dev_attr_timeouts.attr,
&dev_attr_tpm_version_major.attr,
+   &dev_attr_default_locality.attr,
NULL,
 };
 
@@ -344,6 +373,7 @@ static struct attribute *tpm2_dev_attrs[] = {
 #ifdef CONFIG_TCG_TPM2_HMAC
&dev_attr_null_name.attr,
 #endif
+   &dev_attr_default_locality.attr,
NULL
 };
 
-- 
2.39.3




[PATCH v11 17/20] tpm: Add ability to set the default locality the TPM chip uses

2024-09-13 Thread Ross Philipson
Curently the locality is hard coded to 0 but for DRTM support, access
is needed to localities 1 through 4.

Signed-off-by: Ross Philipson 
---
 drivers/char/tpm/tpm-chip.c | 24 +++-
 include/linux/tpm.h |  4 
 2 files changed, 27 insertions(+), 1 deletion(-)

diff --git a/drivers/char/tpm/tpm-chip.c b/drivers/char/tpm/tpm-chip.c
index 854546000c92..1ca390a742ed 100644
--- a/drivers/char/tpm/tpm-chip.c
+++ b/drivers/char/tpm/tpm-chip.c
@@ -44,7 +44,7 @@ static int tpm_request_locality(struct tpm_chip *chip)
if (!chip->ops->request_locality)
return 0;
 
-   rc = chip->ops->request_locality(chip, 0);
+   rc = chip->ops->request_locality(chip, chip->default_locality);
if (rc < 0)
return rc;
 
@@ -143,6 +143,27 @@ void tpm_chip_stop(struct tpm_chip *chip)
 }
 EXPORT_SYMBOL_GPL(tpm_chip_stop);
 
+/**
+ * tpm_chip_set_default_locality() - set the TPM chip default locality to open
+ * @chip:  a TPM chip to use
+ * @locality:   the default locality to set
+ *
+ * Return:
+ * * true  - Preferred locality set
+ * * false - Invalid locality specified
+ */
+bool tpm_chip_set_default_locality(struct tpm_chip *chip, int locality)
+{
+   if (locality < 0 || locality >=TPM_MAX_LOCALITY)
+   return false;
+
+   mutex_lock(&chip->tpm_mutex);
+   chip->default_locality = locality;
+   mutex_unlock(&chip->tpm_mutex);
+   return true;
+}
+EXPORT_SYMBOL_GPL(tpm_chip_set_default_locality);
+
 /**
  * tpm_try_get_ops() - Get a ref to the tpm_chip
  * @chip: Chip to ref
@@ -374,6 +395,7 @@ struct tpm_chip *tpm_chip_alloc(struct device *pdev,
}
 
chip->locality = -1;
+   chip->default_locality = 0;
return chip;
 
 out:
diff --git a/include/linux/tpm.h b/include/linux/tpm.h
index 98f2c7c1c52e..83e94b2f0cef 100644
--- a/include/linux/tpm.h
+++ b/include/linux/tpm.h
@@ -219,6 +219,9 @@ struct tpm_chip {
u8 null_ec_key_y[EC_PT_SZ];
struct tpm2_auth *auth;
 #endif
+
+   /* preferred locality - default 0 */
+   int default_locality;
 };
 
 #define TPM_HEADER_SIZE10
@@ -446,6 +449,7 @@ static inline u32 tpm2_rc_value(u32 rc)
 extern int tpm_is_tpm2(struct tpm_chip *chip);
 extern __must_check int tpm_try_get_ops(struct tpm_chip *chip);
 extern void tpm_put_ops(struct tpm_chip *chip);
+extern bool tpm_chip_set_default_locality(struct tpm_chip *chip, int locality);
 extern ssize_t tpm_transmit_cmd(struct tpm_chip *chip, struct tpm_buf *buf,
size_t min_rsp_body_length, const char *desc);
 extern int tpm_pcr_read(struct tpm_chip *chip, u32 pcr_idx,
-- 
2.39.3




[PATCH v11 16/20] tpm: Make locality requests return consistent values

2024-09-13 Thread Ross Philipson
From: "Daniel P. Smith" 

The function tpm_tis_request_locality() is expected to return the locality
value that was requested, or a negative error code upon failure. If it is called
while locality_count of struct tis_data is non-zero, no actual locality request
will be sent. Because the ret variable is initially set to 0, the
locality_count will still get increased, and the function will return 0. For a
caller, this would indicate that locality 0 was successfully requested and not
the state changes just mentioned.

Additionally, the function __tpm_tis_request_locality() provides inconsistent
error codes. It will provide either a failed IO write or a -1 should it have
timed out waiting for locality request to succeed.

This commit changes __tpm_tis_request_locality() to return valid negative error
codes to reflect the reason it fails. It then adjusts the return value check in
tpm_tis_request_locality() to check for a non-negative return value before
incrementing locality_cout. In addition, the initial value of the ret value is
set to a negative error to ensure the check does not pass if
__tpm_tis_request_locality() is not called.

Signed-off-by: Daniel P. Smith 
Signed-off-by: Ross Philipson 
---
 drivers/char/tpm/tpm_tis_core.c | 11 +++
 1 file changed, 7 insertions(+), 4 deletions(-)

diff --git a/drivers/char/tpm/tpm_tis_core.c b/drivers/char/tpm/tpm_tis_core.c
index 22ebf679ea69..20a8b341be0d 100644
--- a/drivers/char/tpm/tpm_tis_core.c
+++ b/drivers/char/tpm/tpm_tis_core.c
@@ -210,7 +210,7 @@ static int __tpm_tis_request_locality(struct tpm_chip 
*chip, int l)
 again:
timeout = stop - jiffies;
if ((long)timeout <= 0)
-   return -1;
+   return -EBUSY;
rc = wait_event_interruptible_timeout(priv->int_queue,
  (check_locality
   (chip, l)),
@@ -229,18 +229,21 @@ static int __tpm_tis_request_locality(struct tpm_chip 
*chip, int l)
tpm_msleep(TPM_TIMEOUT);
} while (time_before(jiffies, stop));
}
-   return -1;
+   return -EBUSY;
 }
 
 static int tpm_tis_request_locality(struct tpm_chip *chip, int l)
 {
struct tpm_tis_data *priv = dev_get_drvdata(&chip->dev);
-   int ret = 0;
+   int ret = -EBUSY;
+
+   if (l < 0 || l > TPM_MAX_LOCALITY)
+   return -EINVAL;
 
mutex_lock(&priv->locality_count_mutex);
if (priv->locality_count == 0)
ret = __tpm_tis_request_locality(chip, l);
-   if (!ret)
+   if (ret >= 0)
priv->locality_count++;
mutex_unlock(&priv->locality_count_mutex);
return ret;
-- 
2.39.3




[PATCH v11 15/20] tpm: Ensure tpm is in known state at startup

2024-09-13 Thread Ross Philipson
From: "Daniel P. Smith" 

When tis_tis_core initializes, it assumes all localities are closed. There
are cases when this may not be the case. This commit addresses this by
ensuring all localities are closed before initializing begins.

Signed-off-by: Daniel P. Smith 
Signed-off-by: Ross Philipson 
---
 drivers/char/tpm/tpm_tis_core.c | 11 ++-
 include/linux/tpm.h |  6 ++
 2 files changed, 16 insertions(+), 1 deletion(-)

diff --git a/drivers/char/tpm/tpm_tis_core.c b/drivers/char/tpm/tpm_tis_core.c
index a6967f312837..22ebf679ea69 100644
--- a/drivers/char/tpm/tpm_tis_core.c
+++ b/drivers/char/tpm/tpm_tis_core.c
@@ -1107,7 +1107,7 @@ int tpm_tis_core_init(struct device *dev, struct 
tpm_tis_data *priv, int irq,
u32 intmask;
u32 clkrun_val;
u8 rid;
-   int rc, probe;
+   int rc, probe, i;
struct tpm_chip *chip;
 
chip = tpmm_chip_alloc(dev, &tpm_tis);
@@ -1169,6 +1169,15 @@ int tpm_tis_core_init(struct device *dev, struct 
tpm_tis_data *priv, int irq,
goto out_err;
}
 
+   /*
+* There are environments, for example, those that comply with the TCG 
D-RTM
+* specification that requires the TPM to be left in Locality 2.
+*/
+   for (i = 0; i <= TPM_MAX_LOCALITY; i++) {
+   if (check_locality(chip, i))
+   tpm_tis_relinquish_locality(chip, i);
+   }
+
/* Take control of the TPM's interrupt hardware and shut it off */
rc = tpm_tis_read32(priv, TPM_INT_ENABLE(priv->locality), &intmask);
if (rc < 0)
diff --git a/include/linux/tpm.h b/include/linux/tpm.h
index e93ee8d936a9..98f2c7c1c52e 100644
--- a/include/linux/tpm.h
+++ b/include/linux/tpm.h
@@ -147,6 +147,12 @@ struct tpm_chip_seqops {
  */
 #define TPM2_MAX_CONTEXT_SIZE 4096
 
+/*
+ * The maximum locality (0 - 4) for a TPM, as defined in section 3.2 of the
+ * Client Platform Profile Specification.
+ */
+#define TPM_MAX_LOCALITY   4
+
 struct tpm_chip {
struct device dev;
struct device devs;
-- 
2.39.3




[PATCH v11 14/20] tpm: Protect against locality counter underflow

2024-09-13 Thread Ross Philipson
From: "Daniel P. Smith" 

Commit 933bfc5ad213 introduced the use of a locality counter to control when a
locality request is allowed to be sent to the TPM. In the commit, the counter
is indiscriminately decremented. Thus creating a situation for an integer
underflow of the counter.

Signed-off-by: Daniel P. Smith 
Signed-off-by: Ross Philipson 
Reported-by: Kanth Ghatraju 
---
 drivers/char/tpm/tpm_tis_core.c | 5 -
 1 file changed, 4 insertions(+), 1 deletion(-)

diff --git a/drivers/char/tpm/tpm_tis_core.c b/drivers/char/tpm/tpm_tis_core.c
index fdef214b9f6b..a6967f312837 100644
--- a/drivers/char/tpm/tpm_tis_core.c
+++ b/drivers/char/tpm/tpm_tis_core.c
@@ -180,7 +180,10 @@ static int tpm_tis_relinquish_locality(struct tpm_chip 
*chip, int l)
struct tpm_tis_data *priv = dev_get_drvdata(&chip->dev);
 
mutex_lock(&priv->locality_count_mutex);
-   priv->locality_count--;
+   if (priv->locality_count > 0)
+   priv->locality_count--;
+   else
+   pr_info("Invalid: locality count dropped below zero\n");
if (priv->locality_count == 0)
__tpm_tis_relinquish_locality(priv, l);
mutex_unlock(&priv->locality_count_mutex);
-- 
2.39.3




[PATCH v11 13/20] x86/reboot: Secure Launch SEXIT support on reboot paths

2024-09-13 Thread Ross Philipson
If the MLE kernel is being powered off, rebooted or halted,
then SEXIT must be called. Note that the SEXIT GETSEC leaf
can only be called after a machine_shutdown() has been done on
these paths. The machine_shutdown() is not called on a few paths
like when poweroff action does not have a poweroff callback (into
ACPI code) or when an emergency reset is done. In these cases,
just the TXT registers are finalized but SEXIT is skipped.

Signed-off-by: Ross Philipson 
---
 arch/x86/kernel/reboot.c | 10 ++
 1 file changed, 10 insertions(+)

diff --git a/arch/x86/kernel/reboot.c b/arch/x86/kernel/reboot.c
index 0e0a4cf6b5eb..c66e8896d516 100644
--- a/arch/x86/kernel/reboot.c
+++ b/arch/x86/kernel/reboot.c
@@ -13,6 +13,7 @@
 #include 
 #include 
 #include 
+#include 
 #include 
 #include 
 #include 
@@ -778,6 +779,7 @@ static void native_machine_restart(char *__unused)
 
if (!reboot_force)
machine_shutdown();
+   slaunch_finalize(!reboot_force);
__machine_emergency_restart(0);
 }
 
@@ -788,6 +790,9 @@ static void native_machine_halt(void)
 
tboot_shutdown(TB_SHUTDOWN_HALT);
 
+   /* SEXIT done after machine_shutdown() to meet TXT requirements */
+   slaunch_finalize(1);
+
stop_this_cpu(NULL);
 }
 
@@ -796,8 +801,12 @@ static void native_machine_power_off(void)
if (kernel_can_power_off()) {
if (!reboot_force)
machine_shutdown();
+   slaunch_finalize(!reboot_force);
do_kernel_power_off();
+   } else {
+   slaunch_finalize(0);
}
+
/* A fallback in case there is no PM info available */
tboot_shutdown(TB_SHUTDOWN_HALT);
 }
@@ -825,6 +834,7 @@ void machine_shutdown(void)
 
 void machine_emergency_restart(void)
 {
+   slaunch_finalize(0);
__machine_emergency_restart(1);
 }
 
-- 
2.39.3




[PATCH v11 12/20] kexec: Secure Launch kexec SEXIT support

2024-09-13 Thread Ross Philipson
Prior to running the next kernel via kexec, the Secure Launch code
closes down private SMX resources and does an SEXIT. This allows the
next kernel to start normally without any issues starting the APs etc.

Signed-off-by: Ross Philipson 
---
 arch/x86/kernel/slaunch.c | 72 +++
 kernel/kexec_core.c   |  4 +++
 2 files changed, 76 insertions(+)

diff --git a/arch/x86/kernel/slaunch.c b/arch/x86/kernel/slaunch.c
index 5c54288ce980..c828d46f3271 100644
--- a/arch/x86/kernel/slaunch.c
+++ b/arch/x86/kernel/slaunch.c
@@ -522,3 +522,75 @@ void __init slaunch_setup_txt(void)
 
pr_info("Intel TXT setup complete\n");
 }
+
+static inline void smx_getsec_sexit(void)
+{
+   asm volatile ("getsec\n"
+ : : "a" (SMX_X86_GETSEC_SEXIT));
+}
+
+/*
+ * Used during kexec and on reboot paths to finalize the TXT state
+ * and do an SEXIT exiting the DRTM and disabling SMX mode.
+ */
+void slaunch_finalize(int do_sexit)
+{
+   u64 one = TXT_REGVALUE_ONE, val;
+   void __iomem *config;
+
+   if (!slaunch_is_txt_launch())
+   return;
+
+   config = ioremap(TXT_PRIV_CONFIG_REGS_BASE, TXT_NR_CONFIG_PAGES *
+PAGE_SIZE);
+   if (!config) {
+   pr_emerg("Error SEXIT failed to ioremap TXT private reqs\n");
+   return;
+   }
+
+   /* Clear secrets bit for SEXIT */
+   memcpy_toio(config + TXT_CR_CMD_NO_SECRETS, &one, sizeof(one));
+   memcpy_fromio(&val, config + TXT_CR_E2STS, sizeof(val));
+
+   /* Unlock memory configurations */
+   memcpy_toio(config + TXT_CR_CMD_UNLOCK_MEM_CONFIG, &one, sizeof(one));
+   memcpy_fromio(&val, config + TXT_CR_E2STS, sizeof(val));
+
+   /* Close the TXT private register space */
+   memcpy_toio(config + TXT_CR_CMD_CLOSE_PRIVATE, &one, sizeof(one));
+   memcpy_fromio(&val, config + TXT_CR_E2STS, sizeof(val));
+
+   /*
+* Calls to iounmap are not being done because of the state of the
+* system this late in the kexec process. Local IRQs are disabled and
+* iounmap causes a TLB flush which in turn causes a warning. Leaving
+* thse mappings is not an issue since the next kernel is going to
+* completely re-setup memory management.
+*/
+
+   /* Map public registers and do a final read fence */
+   config = ioremap(TXT_PUB_CONFIG_REGS_BASE, TXT_NR_CONFIG_PAGES *
+PAGE_SIZE);
+   if (!config) {
+   pr_emerg("Error SEXIT failed to ioremap TXT public reqs\n");
+   return;
+   }
+
+   memcpy_fromio(&val, config + TXT_CR_E2STS, sizeof(val));
+
+   pr_emerg("TXT clear secrets bit and unlock memory complete.\n");
+
+   if (!do_sexit)
+   return;
+
+   if (smp_processor_id() != 0)
+   panic("Error TXT SEXIT must be called on CPU 0\n");
+
+   /* In case SMX mode was disabled, enable it for SEXIT */
+   cr4_set_bits(X86_CR4_SMXE);
+
+   /* Do the SEXIT SMX operation */
+   smx_getsec_sexit();
+
+   pr_info("TXT SEXIT complete.\n");
+}
diff --git a/kernel/kexec_core.c b/kernel/kexec_core.c
index c0caa14880c3..53d5ae8326a3 100644
--- a/kernel/kexec_core.c
+++ b/kernel/kexec_core.c
@@ -40,6 +40,7 @@
 #include 
 #include 
 #include 
+#include 
 
 #include 
 #include 
@@ -1045,6 +1046,9 @@ int kernel_kexec(void)
cpu_hotplug_enable();
pr_notice("Starting new kernel\n");
machine_shutdown();
+
+   /* Finalize TXT registers and do SEXIT */
+   slaunch_finalize(1);
}
 
kmsg_dump(KMSG_DUMP_SHUTDOWN);
-- 
2.39.3




[PATCH v11 11/20] x86: Secure Launch SMP bringup support

2024-09-13 Thread Ross Philipson
On Intel, the APs are left in a well documented state after TXT performs
the late launch. Specifically they cannot have #INIT asserted on them so
a standard startup via INIT/SIPI/SIPI cannot be performed. Instead the
early SL stub code uses MONITOR and MWAIT to park the APs. The realmode/init.c
code updates the jump address for the waiting APs with the location of the
Secure Launch entry point in the RM piggy after it is loaded and fixed up.
As the APs are woken up by writing the monitor, the APs jump to the Secure
Launch entry point in the RM piggy which mimics what the real mode code would
do then jumps to the standard RM piggy protected mode entry point.

Signed-off-by: Ross Philipson 
---
 arch/x86/include/asm/realmode.h  |  3 ++
 arch/x86/kernel/smpboot.c| 43 ++--
 arch/x86/realmode/init.c |  3 ++
 arch/x86/realmode/rm/header.S|  3 ++
 arch/x86/realmode/rm/trampoline_64.S | 32 +
 5 files changed, 82 insertions(+), 2 deletions(-)

diff --git a/arch/x86/include/asm/realmode.h b/arch/x86/include/asm/realmode.h
index 87e5482acd0d..339b48e2543d 100644
--- a/arch/x86/include/asm/realmode.h
+++ b/arch/x86/include/asm/realmode.h
@@ -38,6 +38,9 @@ struct real_mode_header {
 #ifdef CONFIG_X86_64
u32 machine_real_restart_seg;
 #endif
+#ifdef CONFIG_SECURE_LAUNCH
+   u32 sl_trampoline_start32;
+#endif
 };
 
 /* This must match data at realmode/rm/trampoline_{32,64}.S */
diff --git a/arch/x86/kernel/smpboot.c b/arch/x86/kernel/smpboot.c
index 0c35207320cb..0c915e105a9b 100644
--- a/arch/x86/kernel/smpboot.c
+++ b/arch/x86/kernel/smpboot.c
@@ -60,6 +60,7 @@
 #include 
 #include 
 #include 
+#include 
 
 #include 
 #include 
@@ -868,6 +869,41 @@ int common_cpu_up(unsigned int cpu, struct task_struct 
*idle)
return 0;
 }
 
+#ifdef CONFIG_SECURE_LAUNCH
+
+/*
+ * TXT AP startup is quite different than normal. The APs cannot have #INIT
+ * asserted on them or receive SIPIs. The early Secure Launch code has parked
+ * the APs using monitor/mwait. This will wake the APs by writing the monitor
+ * and have them jump to the protected mode code in the rmpiggy where the rest
+ * of the SMP boot of the AP will proceed normally.
+ */
+static void slaunch_wakeup_cpu_from_txt(int cpu, int apicid)
+{
+   struct sl_ap_stack_and_monitor *stack_monitor;
+   struct sl_ap_wake_info *ap_wake_info;
+
+   ap_wake_info = slaunch_get_ap_wake_info();
+
+   stack_monitor = (struct sl_ap_stack_and_monitor 
*)__va(ap_wake_info->ap_wake_block +
+  
ap_wake_info->ap_stacks_offset);
+
+   for (unsigned int i = TXT_MAX_CPUS - 1; i >= 0; i--) {
+   if (stack_monitor[i].apicid == apicid) {
+   stack_monitor[i].monitor = 1;
+   break;
+   }
+   }
+}
+
+#else
+
+static inline void slaunch_wakeup_cpu_from_txt(int cpu, int apicid)
+{
+}
+
+#endif  /* !CONFIG_SECURE_LAUNCH */
+
 /*
  * NOTE - on most systems this is a PHYSICAL apic ID, but on multiquad
  * (ie clustered apic addressing mode), this is a LOGICAL apic ID.
@@ -877,7 +913,7 @@ int common_cpu_up(unsigned int cpu, struct task_struct 
*idle)
 static int do_boot_cpu(u32 apicid, int cpu, struct task_struct *idle)
 {
unsigned long start_ip = real_mode_header->trampoline_start;
-   int ret;
+   int ret = 0;
 
 #ifdef CONFIG_X86_64
/* If 64-bit wakeup method exists, use the 64-bit mode trampoline IP */
@@ -922,12 +958,15 @@ static int do_boot_cpu(u32 apicid, int cpu, struct 
task_struct *idle)
 
/*
 * Wake up a CPU in difference cases:
+* - Intel TXT DRTM launch uses its own method to wake the APs
 * - Use a method from the APIC driver if one defined, with wakeup
 *   straight to 64-bit mode preferred over wakeup to RM.
 * Otherwise,
 * - Use an INIT boot APIC message
 */
-   if (apic->wakeup_secondary_cpu_64)
+   if (slaunch_is_txt_launch())
+   slaunch_wakeup_cpu_from_txt(cpu, apicid);
+   else if (apic->wakeup_secondary_cpu_64)
ret = apic->wakeup_secondary_cpu_64(apicid, start_ip);
else if (apic->wakeup_secondary_cpu)
ret = apic->wakeup_secondary_cpu(apicid, start_ip);
diff --git a/arch/x86/realmode/init.c b/arch/x86/realmode/init.c
index f9bc444a3064..d95776cb30d3 100644
--- a/arch/x86/realmode/init.c
+++ b/arch/x86/realmode/init.c
@@ -4,6 +4,7 @@
 #include 
 #include 
 #include 
+#include 
 
 #include 
 #include 
@@ -210,6 +211,8 @@ void __init init_real_mode(void)
 
setup_real_mode();
set_real_mode_permissions();
+
+   slaunch_fixup_jump_vector();
 }
 
 static int __init do_init_real_mode(void)
diff --git a/arch/x86/realmode/rm/header.S b/arch/x86/realmode/rm/header.S
index 2eb62be6d256..3b5cbcbbfc90 100644
--- a/arch/x86/realmode/rm/header.S

[PATCH v11 10/20] x86: Secure Launch kernel late boot stub

2024-09-13 Thread Ross Philipson
The routine slaunch_setup is called out of the x86 specific setup_arch()
routine during early kernel boot. After determining what platform is
present, various operations specific to that platform occur. This
includes finalizing setting for the platform late launch and verifying
that memory protections are in place.

Intel VT-d/IOMMU hardware provides special registers called Protected
Memory Regions (PMRs) that allow all memory to be protected from
DMA during a TXT DRTM launch. This coverage is validated during the
late setup process to ensure DMA protection is in place prior to
the IOMMUs being initialized and configured by the mainline kernel.
See the Intel Trusted Execution Technology - Measured Launch Environment
Developer's Guide for more details.

For TXT, this code also reserves the original compressed kernel setup
area where the APs were left looping so that this memory cannot be used.

Signed-off-by: Ross Philipson 
---
 arch/x86/kernel/Makefile   |   1 +
 arch/x86/kernel/setup.c|   3 +
 arch/x86/kernel/slaunch.c  | 524 +
 drivers/iommu/intel/dmar.c |   4 +
 4 files changed, 532 insertions(+)
 create mode 100644 arch/x86/kernel/slaunch.c

diff --git a/arch/x86/kernel/Makefile b/arch/x86/kernel/Makefile
index a847180836e4..a18a8239bde5 100644
--- a/arch/x86/kernel/Makefile
+++ b/arch/x86/kernel/Makefile
@@ -72,6 +72,7 @@ obj-$(CONFIG_X86_32)  += tls.o
 obj-$(CONFIG_IA32_EMULATION)   += tls.o
 obj-y  += step.o
 obj-$(CONFIG_INTEL_TXT)+= tboot.o
+obj-$(CONFIG_SECURE_LAUNCH)+= slaunch.o
 obj-$(CONFIG_ISA_DMA_API)  += i8237.o
 obj-y  += stacktrace.o
 obj-y  += cpu/
diff --git a/arch/x86/kernel/setup.c b/arch/x86/kernel/setup.c
index 6129dc2ba784..d915f21306aa 100644
--- a/arch/x86/kernel/setup.c
+++ b/arch/x86/kernel/setup.c
@@ -21,6 +21,7 @@
 #include 
 #include 
 #include 
+#include 
 #include 
 #include 
 #include 
@@ -938,6 +939,8 @@ void __init setup_arch(char **cmdline_p)
early_gart_iommu_check();
 #endif
 
+   slaunch_setup_txt();
+
/*
 * partially used pages are not usable - thus
 * we are rounding upwards:
diff --git a/arch/x86/kernel/slaunch.c b/arch/x86/kernel/slaunch.c
new file mode 100644
index ..5c54288ce980
--- /dev/null
+++ b/arch/x86/kernel/slaunch.c
@@ -0,0 +1,524 @@
+// SPDX-License-Identifier: GPL-2.0
+/*
+ * Secure Launch late validation/setup and finalization support.
+ *
+ * Copyright (c) 2024, Oracle and/or its affiliates.
+ */
+
+#define pr_fmt(fmt) KBUILD_MODNAME ": " fmt
+
+#include 
+#include 
+#include 
+#include 
+#include 
+#include 
+#include 
+#include 
+#include 
+#include 
+#include 
+#include 
+#include 
+#include 
+
+static u32 sl_flags __ro_after_init;
+static struct sl_ap_wake_info ap_wake_info __ro_after_init;
+static u64 evtlog_addr __ro_after_init;
+static u32 evtlog_size __ro_after_init;
+static u64 vtd_pmr_lo_size __ro_after_init;
+
+/* This should be plenty of room */
+static u8 txt_dmar[PAGE_SIZE] __aligned(16);
+
+/*
+ * Get the Secure Launch flags that indicate what kind of launch is being done.
+ * E.g. a TXT launch is in progress or no Secure Launch is happening.
+ */
+u32 slaunch_get_flags(void)
+{
+   return sl_flags;
+}
+
+/*
+ * Return the AP wakeup information used in the SMP boot code to start up
+ * the APs that are parked using MONITOR/MWAIT.
+ */
+struct sl_ap_wake_info *slaunch_get_ap_wake_info(void)
+{
+   return &ap_wake_info;
+}
+
+/*
+ * On Intel platforms, TXT passes a safe copy of the DMAR ACPI table to the
+ * DRTM. The DRTM is supposed to use this instead of the one found in the
+ * ACPI tables.
+ */
+struct acpi_table_header *slaunch_get_dmar_table(struct acpi_table_header 
*dmar)
+{
+   /* The DMAR is only stashed and provided via TXT on Intel systems */
+   if (memcmp(txt_dmar, "DMAR", 4))
+   return dmar;
+
+   return (struct acpi_table_header *)(txt_dmar);
+}
+
+/*
+ * If running within a TXT established DRTM, this is the proper way to reset
+ * the system if a failure occurs or a security issue is found.
+ */
+void __noreturn slaunch_txt_reset(void __iomem *txt,
+ const char *msg, u64 error)
+{
+   u64 one = 1, val;
+
+   pr_err("%s", msg);
+
+   /*
+* This performs a TXT reset with a sticky error code. The reads of
+* TXT_CR_E2STS act as barriers.
+*/
+   memcpy_toio(txt + TXT_CR_ERRORCODE, &error, sizeof(error));
+   memcpy_fromio(&val, txt + TXT_CR_E2STS, sizeof(val));
+   memcpy_toio(txt + TXT_CR_CMD_NO_SECRETS, &one, sizeof(one));
+   memcpy_fromio(&val, txt + TXT_CR_E2STS, sizeof(val));
+   memcpy_toio(txt + TXT_CR_CMD_UNLOCK_MEM_CONFIG, &one, sizeof(one));
+   memcpy_fromio(&val, txt + TXT_CR_E2STS, sizeof(val));
+   memcpy

[PATCH v11 09/20] x86: Secure Launch kernel early boot stub

2024-09-13 Thread Ross Philipson
The Secure Launch (SL) stub provides the entry point for Intel TXT (and
later AMD SKINIT) to vector to during the late launch. The symbol
sl_stub_entry is that entry point and its offset into the kernel is
conveyed to the launching code using the MLE (Measured Launch
Environment) header in the structure named mle_header. The offset of the
MLE header is set in the kernel_info. The routine sl_stub contains the
very early late launch setup code responsible for setting up the basic
environment to allow the normal kernel startup_32 code to proceed. It is
also responsible for properly waking and handling the APs on Intel
platforms. The routine sl_main which runs after entering 64b mode is
responsible for measuring configuration and module information before
it is used like the boot params, the kernel command line, the TXT heap,
an external initramfs, etc.

Signed-off-by: Ross Philipson 
---
 Documentation/arch/x86/boot.rst   |  21 +
 arch/x86/boot/compressed/Makefile |   3 +-
 arch/x86/boot/compressed/head_64.S|  29 +
 arch/x86/boot/compressed/sl_main.c| 584 +
 arch/x86/boot/compressed/sl_stub.S| 726 ++
 arch/x86/include/uapi/asm/bootparam.h |   1 +
 arch/x86/kernel/asm-offsets.c |  20 +
 7 files changed, 1383 insertions(+), 1 deletion(-)
 create mode 100644 arch/x86/boot/compressed/sl_main.c
 create mode 100644 arch/x86/boot/compressed/sl_stub.S

diff --git a/Documentation/arch/x86/boot.rst b/Documentation/arch/x86/boot.rst
index 4fd492cb4970..295cdf9bcbdb 100644
--- a/Documentation/arch/x86/boot.rst
+++ b/Documentation/arch/x86/boot.rst
@@ -482,6 +482,14 @@ Protocol:  2.00+
- If 1, KASLR enabled.
- If 0, KASLR disabled.
 
+  Bit 2 (kernel internal): SLAUNCH_FLAG
+
+   - Used internally by the setup kernel to communicate
+ Secure Launch status to kernel proper.
+
+   - If 1, Secure Launch enabled.
+   - If 0, Secure Launch disabled.
+
   Bit 5 (write): QUIET_FLAG
 
- If 0, print early messages.
@@ -1028,6 +1036,19 @@ Offset/size: 0x000c/4
 
   This field contains maximal allowed type for setup_data and setup_indirect 
structs.
 
+   =
+Field name:mle_header_offset
+Offset/size:   0x0010/4
+   =
+
+  This field contains the offset to the Secure Launch Measured Launch 
Environment
+  (MLE) header. This offset is used to locate information needed during a 
secure
+  late launch using Intel TXT. If the offset is zero, the kernel does not have
+  Secure Launch capabilities. The MLE entry point is called from TXT on the BSP
+  following a success measured launch. The specific state of the processors is
+  outlined in the TXT Software Development Guide, the latest can be found here:
+  
https://www.intel.com/content/dam/www/public/us/en/documents/guides/intel-txt-software-development-guide.pdf
+
 
 The Image Checksum
 ==
diff --git a/arch/x86/boot/compressed/Makefile 
b/arch/x86/boot/compressed/Makefile
index 40dc0b9babd5..ce651eaa68dd 100644
--- a/arch/x86/boot/compressed/Makefile
+++ b/arch/x86/boot/compressed/Makefile
@@ -107,7 +107,8 @@ vmlinux-objs-$(CONFIG_EFI) += $(obj)/efi.o
 vmlinux-objs-$(CONFIG_EFI_MIXED) += $(obj)/efi_mixed.o
 vmlinux-libs-$(CONFIG_EFI_STUB) += 
$(objtree)/drivers/firmware/efi/libstub/lib.a
 
-vmlinux-objs-$(CONFIG_SECURE_LAUNCH) += $(obj)/sha1.o $(obj)/sha256.o
+vmlinux-objs-$(CONFIG_SECURE_LAUNCH) += $(obj)/sha1.o $(obj)/sha256.o \
+   $(obj)/sl_main.o $(obj)/sl_stub.o
 
 $(obj)/vmlinux: $(vmlinux-objs-y) $(vmlinux-libs-y) FORCE
$(call if_changed,ld)
diff --git a/arch/x86/boot/compressed/head_64.S 
b/arch/x86/boot/compressed/head_64.S
index 1dcb794c5479..545329c97377 100644
--- a/arch/x86/boot/compressed/head_64.S
+++ b/arch/x86/boot/compressed/head_64.S
@@ -420,6 +420,13 @@ SYM_CODE_START(startup_64)
pushq   $0
popfq
 
+#ifdef CONFIG_SECURE_LAUNCH
+   /* Ensure the relocation region is covered by a PMR */
+   movq%rbx, %rdi
+   movl$(_bss - startup_32), %esi
+   callq   sl_check_region
+#endif
+
 /*
  * Copy the compressed kernel to the end of our buffer
  * where decompression in place becomes safe.
@@ -462,6 +469,28 @@ SYM_FUNC_START_LOCAL_NOALIGN(.Lrelocated)
shrq$3, %rcx
rep stosq
 
+#ifdef CONFIG_SECURE_LAUNCH
+   /*
+* Have to do the final early sl stub work in 64b area.
+*
+* *** NOTE ***
+*
+* Several boot params get used before we get a chance to measure
+* them in this call. This is a known issue and we currently don't
+* have a solution. The scratch field doesn't matter. There is no
+* obvious way to do anything about the use of kernel_alignment or
+* init_size though these seem low risk with all the PMR and overlap
+* checks in place.
+*/
+   movq%r15, %rdi
+  

[PATCH v11 08/20] x86/boot: Place TXT MLE header in the kernel_info section

2024-09-13 Thread Ross Philipson
The MLE (measured launch environment) header must be locatable by the
boot loader and TXT must be setup to do a launch with this header's
location. While the offset to the kernel_info structure does not need
to be at a fixed offset, the offsets in the header must be relative
offsets from the start of the setup kernel. The support in the linker
file achieves this.

Signed-off-by: Ross Philipson 
Suggested-by: Ard Biesheuvel 
Reviewed-by: Ard Biesheuvel 
---
 arch/x86/boot/compressed/kernel_info.S | 50 +++---
 arch/x86/boot/compressed/vmlinux.lds.S |  7 
 2 files changed, 53 insertions(+), 4 deletions(-)

diff --git a/arch/x86/boot/compressed/kernel_info.S 
b/arch/x86/boot/compressed/kernel_info.S
index f818ee8fba38..a0604a0d1756 100644
--- a/arch/x86/boot/compressed/kernel_info.S
+++ b/arch/x86/boot/compressed/kernel_info.S
@@ -1,12 +1,20 @@
 /* SPDX-License-Identifier: GPL-2.0 */
 
+#include 
 #include 
 
-   .section ".rodata.kernel_info", "a"
+/*
+ * The kernel_info structure is not placed at a fixed offest in the
+ * kernel image. So this macro and the support in the linker file
+ * allow the relative offsets for the MLE header within the kernel
+ * image to be configured at build time.
+ */
+#define roffset(X) ((X) - kernel_info)
 
-   .global kernel_info
+   .section ".rodata.kernel_info", "a"
 
-kernel_info:
+   .balign 16
+SYM_DATA_START(kernel_info)
/* Header, Linux top (structure). */
.ascii  "LToP"
/* Size. */
@@ -17,6 +25,40 @@ kernel_info:
/* Maximal allowed type for setup_data and setup_indirect structs. */
.long   SETUP_TYPE_MAX
 
+   /* Offset to the MLE header structure */
+#if IS_ENABLED(CONFIG_SECURE_LAUNCH)
+   .long   roffset(mle_header_offset)
+#else
+   .long   0
+#endif
+
 kernel_info_var_len_data:
/* Empty for time being... */
-kernel_info_end:
+SYM_DATA_END_LABEL(kernel_info, SYM_L_LOCAL, kernel_info_end)
+
+#if IS_ENABLED(CONFIG_SECURE_LAUNCH)
+   /*
+* The MLE Header per the TXT Specification, section 2.1
+* MLE capabilities, see table 4. Capabilities set:
+* bit 0: Support for GETSEC[WAKEUP] for RLP wakeup
+* bit 1: Support for RLP wakeup using MONITOR address
+* bit 2: The ECX register will contain the pointer to the MLE page 
table
+* bit 5: TPM 1.2 family: Details/authorities PCR usage support
+* bit 9: Supported format of TPM 2.0 event log - TCG compliant
+*/
+SYM_DATA_START(mle_header)
+   .long   0x9082ac5a  /* UUID0 */
+   .long   0x74a7476f  /* UUID1 */
+   .long   0xa2555c0f  /* UUID2 */
+   .long   0x42b651cb  /* UUID3 */
+   .long   0x0034  /* MLE header size */
+   .long   0x00020002  /* MLE version 2.2 */
+   .long   roffset(sl_stub_entry_offset) /* Linear entry point of MLE 
(virt. address) */
+   .long   0x  /* First valid page of MLE */
+   .long   0x  /* Offset within binary of first byte of MLE */
+   .long   roffset(_edata_offset) /* Offset within binary of last byte + 1 
of MLE */
+   .long   0x0227  /* Bit vector of MLE-supported capabilities */
+   .long   0x  /* Starting linear address of command line (unused) 
*/
+   .long   0x  /* Ending linear address of command line (unused) */
+SYM_DATA_END(mle_header)
+#endif
diff --git a/arch/x86/boot/compressed/vmlinux.lds.S 
b/arch/x86/boot/compressed/vmlinux.lds.S
index 083ec6d7722a..f82184801462 100644
--- a/arch/x86/boot/compressed/vmlinux.lds.S
+++ b/arch/x86/boot/compressed/vmlinux.lds.S
@@ -118,3 +118,10 @@ SECTIONS
}
ASSERT(SIZEOF(.rela.dyn) == 0, "Unexpected run-time relocations (.rela) 
detected!")
 }
+
+#ifdef CONFIG_SECURE_LAUNCH
+PROVIDE(kernel_info_offset  = ABSOLUTE(kernel_info - startup_32));
+PROVIDE(mle_header_offset   = kernel_info_offset + ABSOLUTE(mle_header - 
startup_32));
+PROVIDE(sl_stub_entry_offset= kernel_info_offset + ABSOLUTE(sl_stub_entry 
- startup_32));
+PROVIDE(_edata_offset   = kernel_info_offset + ABSOLUTE(_edata - 
startup_32));
+#endif
-- 
2.39.3




[PATCH v11 07/20] x86/msr: Add variable MTRR base/mask and x2apic ID registers

2024-09-13 Thread Ross Philipson
These values are needed by Secure Launch to locate particular CPUs
during AP startup and to restore the MTRR state after a TXT launch.

Signed-off-by: Ross Philipson 
---
 arch/x86/include/asm/msr-index.h | 5 +
 1 file changed, 5 insertions(+)

diff --git a/arch/x86/include/asm/msr-index.h b/arch/x86/include/asm/msr-index.h
index 82c6a4d350e0..9fbc0e554f99 100644
--- a/arch/x86/include/asm/msr-index.h
+++ b/arch/x86/include/asm/msr-index.h
@@ -348,6 +348,9 @@
 #define MSR_IA32_RTIT_OUTPUT_BASE  0x0560
 #define MSR_IA32_RTIT_OUTPUT_MASK  0x0561
 
+#define MSR_MTRRphysBase0  0x0200
+#define MSR_MTRRphysMask0  0x0201
+
 #define MSR_MTRRfix64K_0   0x0250
 #define MSR_MTRRfix16K_8   0x0258
 #define MSR_MTRRfix16K_A   0x0259
@@ -859,6 +862,8 @@
 #define MSR_IA32_APICBASE_ENABLE   (1<<11)
 #define MSR_IA32_APICBASE_BASE (0xf<<12)
 
+#define MSR_IA32_X2APIC_APICID 0x0802
+
 #define MSR_IA32_UCODE_WRITE   0x0079
 #define MSR_IA32_UCODE_REV 0x008b
 
-- 
2.39.3




[PATCH v11 06/20] x86: Add early SHA-256 support for Secure Launch early measurements

2024-09-13 Thread Ross Philipson
From: "Daniel P. Smith" 

The SHA-256 algorithm is necessary to measure configuration information into
the TPM as early as possible before using the values. This implementation
uses the established approach of #including the SHA-256 libraries directly in
the code since the compressed kernel is not uncompressed at this point.

Signed-off-by: Daniel P. Smith 
Signed-off-by: Ross Philipson 
---
 arch/x86/boot/compressed/Makefile | 2 +-
 arch/x86/boot/compressed/sha256.c | 6 ++
 2 files changed, 7 insertions(+), 1 deletion(-)
 create mode 100644 arch/x86/boot/compressed/sha256.c

diff --git a/arch/x86/boot/compressed/Makefile 
b/arch/x86/boot/compressed/Makefile
index 7eb03afb841b..40dc0b9babd5 100644
--- a/arch/x86/boot/compressed/Makefile
+++ b/arch/x86/boot/compressed/Makefile
@@ -107,7 +107,7 @@ vmlinux-objs-$(CONFIG_EFI) += $(obj)/efi.o
 vmlinux-objs-$(CONFIG_EFI_MIXED) += $(obj)/efi_mixed.o
 vmlinux-libs-$(CONFIG_EFI_STUB) += 
$(objtree)/drivers/firmware/efi/libstub/lib.a
 
-vmlinux-objs-$(CONFIG_SECURE_LAUNCH) += $(obj)/sha1.o
+vmlinux-objs-$(CONFIG_SECURE_LAUNCH) += $(obj)/sha1.o $(obj)/sha256.o
 
 $(obj)/vmlinux: $(vmlinux-objs-y) $(vmlinux-libs-y) FORCE
$(call if_changed,ld)
diff --git a/arch/x86/boot/compressed/sha256.c 
b/arch/x86/boot/compressed/sha256.c
new file mode 100644
index ..293742a90ddc
--- /dev/null
+++ b/arch/x86/boot/compressed/sha256.c
@@ -0,0 +1,6 @@
+// SPDX-License-Identifier: GPL-2.0
+/*
+ * Copyright (c) 2024 Apertus Solutions, LLC
+ */
+
+#include "../../../../lib/crypto/sha256.c"
-- 
2.39.3




[PATCH v11 05/20] x86: Add early SHA-1 support for Secure Launch early measurements

2024-09-13 Thread Ross Philipson
From: "Daniel P. Smith" 

Secure Launch is written to be compliant with the Intel TXT Measured Launch
Developer's Guide. The MLE Guide dictates that the system can be configured to
use both the SHA-1 and SHA-2 hashing algorithms.

Regardless of the preference towards SHA-2, if the firmware elected to start
with the SHA-1 and SHA-2 banks active and the dynamic launch was configured to
include SHA-1, Secure Launch is obligated to record measurements for all
algorithms requested in the launch configuration.

The user environment or the integrity management does not desire to use SHA-1,
it is free to just ignore the SHA-1 bank in any integrity operation with the
TPM. If there is a larger concern about the SHA-1 bank being active, it is free
to deliberately cap the SHA-1 PCRs, recording the event in the D-RTM log.

The SHA-1 code here has its origins in the code from the main kernel:

commit c4d5b9f ("crypto: sha1 - implement base layer for SHA-1")

A modified version of this code was introduced to the lib/crypto/sha1.c to bring
it in line with the SHA-256 code and allow it to be pulled into the setup kernel
in the same manner as SHA-256 is.

Signed-off-by: Daniel P. Smith 
Signed-off-by: Ross Philipson 
---
 arch/x86/boot/compressed/Makefile |  2 +
 arch/x86/boot/compressed/sha1.c   |  6 +++
 include/crypto/sha1.h |  1 +
 lib/crypto/sha1.c | 81 +++
 4 files changed, 90 insertions(+)
 create mode 100644 arch/x86/boot/compressed/sha1.c

diff --git a/arch/x86/boot/compressed/Makefile 
b/arch/x86/boot/compressed/Makefile
index f2051644de94..7eb03afb841b 100644
--- a/arch/x86/boot/compressed/Makefile
+++ b/arch/x86/boot/compressed/Makefile
@@ -107,6 +107,8 @@ vmlinux-objs-$(CONFIG_EFI) += $(obj)/efi.o
 vmlinux-objs-$(CONFIG_EFI_MIXED) += $(obj)/efi_mixed.o
 vmlinux-libs-$(CONFIG_EFI_STUB) += 
$(objtree)/drivers/firmware/efi/libstub/lib.a
 
+vmlinux-objs-$(CONFIG_SECURE_LAUNCH) += $(obj)/sha1.o
+
 $(obj)/vmlinux: $(vmlinux-objs-y) $(vmlinux-libs-y) FORCE
$(call if_changed,ld)
 
diff --git a/arch/x86/boot/compressed/sha1.c b/arch/x86/boot/compressed/sha1.c
new file mode 100644
index ..d754489941ac
--- /dev/null
+++ b/arch/x86/boot/compressed/sha1.c
@@ -0,0 +1,6 @@
+// SPDX-License-Identifier: GPL-2.0
+/*
+ * Copyright (c) 2024 Apertus Solutions, LLC.
+ */
+
+#include "../../../../lib/crypto/sha1.c"
diff --git a/include/crypto/sha1.h b/include/crypto/sha1.h
index 044ecea60ac8..d715dd5332e1 100644
--- a/include/crypto/sha1.h
+++ b/include/crypto/sha1.h
@@ -42,5 +42,6 @@ extern int crypto_sha1_finup(struct shash_desc *desc, const 
u8 *data,
 #define SHA1_WORKSPACE_WORDS   16
 void sha1_init(__u32 *buf);
 void sha1_transform(__u32 *digest, const char *data, __u32 *W);
+void sha1(const u8 *data, unsigned int len, u8 *out);
 
 #endif /* _CRYPTO_SHA1_H */
diff --git a/lib/crypto/sha1.c b/lib/crypto/sha1.c
index 6d2922747cab..de11d22ebded 100644
--- a/lib/crypto/sha1.c
+++ b/lib/crypto/sha1.c
@@ -137,5 +137,86 @@ void sha1_init(__u32 *buf)
 }
 EXPORT_SYMBOL(sha1_init);
 
+static void __sha1_transform(u32 *digest, const char *data)
+{
+   u32 ws[SHA1_WORKSPACE_WORDS];
+
+   sha1_transform(digest, data, ws);
+
+   /* Ensure local data for generating digest is cleared in all cases */
+   memzero_explicit(ws, sizeof(ws));
+}
+
+static void sha1_update(struct sha1_state *sctx, const u8 *data, unsigned int 
len)
+{
+   unsigned int partial = sctx->count % SHA1_BLOCK_SIZE;
+   int blocks;
+
+   sctx->count += len;
+
+   if (unlikely((partial + len) < SHA1_BLOCK_SIZE))
+   goto out;
+
+   if (partial) {
+   int p = SHA1_BLOCK_SIZE - partial;
+
+   memcpy(sctx->buffer + partial, data, p);
+   data += p;
+   len -= p;
+
+   __sha1_transform(sctx->state, sctx->buffer);
+   }
+
+   blocks = len / SHA1_BLOCK_SIZE;
+   len %= SHA1_BLOCK_SIZE;
+
+   if (blocks) {
+   while (blocks--) {
+   __sha1_transform(sctx->state, data);
+   data += SHA1_BLOCK_SIZE;
+   }
+   }
+   partial = 0;
+
+out:
+   memcpy(sctx->buffer + partial, data, len);
+}
+
+static void sha1_final(struct sha1_state *sctx, u8 *out)
+{
+   const int bit_offset = SHA1_BLOCK_SIZE - sizeof(__be64);
+   unsigned int partial = sctx->count % SHA1_BLOCK_SIZE;
+   __be64 *bits = (__be64 *)(sctx->buffer + bit_offset);
+   __be32 *digest = (__be32 *)out;
+   int i;
+
+   sctx->buffer[partial++] = 0x80;
+   if (partial > bit_offset) {
+   memset(sctx->buffer + partial, 0x0, SHA1_BLOCK_SIZE - partial);
+   partial = 0;
+
+   __sha1_transform(sctx->state, sctx->buffer);
+   }
+
+   memset(sctx->buffer + partial, 0x0, bit_offset - partial);
+   *bits 

[PATCH v11 04/20] x86: Secure Launch main header file

2024-09-13 Thread Ross Philipson
Introduce the main Secure Launch header file used in the early SL stub
and the early setup code.

Signed-off-by: Ross Philipson 
---
 include/linux/slaunch.h | 548 
 1 file changed, 548 insertions(+)
 create mode 100644 include/linux/slaunch.h

diff --git a/include/linux/slaunch.h b/include/linux/slaunch.h
new file mode 100644
index ..efb1235b3e1b
--- /dev/null
+++ b/include/linux/slaunch.h
@@ -0,0 +1,548 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+/*
+ * Main Secure Launch header file.
+ *
+ * Copyright (c) 2024 Apertus Solutions, LLC
+ * Copyright (c) 2024, Oracle and/or its affiliates.
+ */
+
+#ifndef _LINUX_SLAUNCH_H
+#define _LINUX_SLAUNCH_H
+
+/*
+ * Secure Launch Defined State Flags
+ */
+#define SL_FLAG_ACTIVE 0x0001
+#define SL_FLAG_ARCH_TXT   0x0002
+
+/*
+ * Secure Launch CPU Type
+ */
+#define SL_CPU_INTEL   1
+
+#define __SL32_CS  0x0008
+#define __SL32_DS  0x0010
+
+/*
+ * Intel Safer Mode Extensions (SMX)
+ *
+ * Intel SMX provides a programming interface to establish a Measured Launched
+ * Environment (MLE). The measurement and protection mechanisms supported by 
the
+ * capabilities of an Intel Trusted Execution Technology (TXT) platform. SMX is
+ * the processor’s programming interface in an Intel TXT platform.
+ *
+ * See:
+ *   Intel SDM Volume 2 - 6.1 "Safer Mode Extensions Reference"
+ *   Intel Trusted Execution Technology - Measured Launch Environment 
Developer’s Guide
+ */
+
+/*
+ * SMX GETSEC Leaf Functions
+ */
+#define SMX_X86_GETSEC_SEXIT   5
+#define SMX_X86_GETSEC_SMCTRL  7
+#define SMX_X86_GETSEC_WAKEUP  8
+
+/*
+ * Intel Trusted Execution Technology MMIO Registers Banks
+ */
+#define TXT_PUB_CONFIG_REGS_BASE   0xfed3
+#define TXT_PRIV_CONFIG_REGS_BASE  0xfed2
+#define TXT_NR_CONFIG_PAGES ((TXT_PUB_CONFIG_REGS_BASE - \
+ TXT_PRIV_CONFIG_REGS_BASE) >> PAGE_SHIFT)
+
+/*
+ * Intel Trusted Execution Technology (TXT) Registers
+ */
+#define TXT_CR_STS 0x
+#define TXT_CR_ESTS0x0008
+#define TXT_CR_ERRORCODE   0x0030
+#define TXT_CR_CMD_RESET   0x0038
+#define TXT_CR_CMD_CLOSE_PRIVATE   0x0048
+#define TXT_CR_DIDVID  0x0110
+#define TXT_CR_VER_EMIF0x0200
+#define TXT_CR_CMD_UNLOCK_MEM_CONFIG   0x0218
+#define TXT_CR_SINIT_BASE  0x0270
+#define TXT_CR_SINIT_SIZE  0x0278
+#define TXT_CR_MLE_JOIN0x0290
+#define TXT_CR_HEAP_BASE   0x0300
+#define TXT_CR_HEAP_SIZE   0x0308
+#define TXT_CR_SCRATCHPAD  0x0378
+#define TXT_CR_CMD_OPEN_LOCALITY1  0x0380
+#define TXT_CR_CMD_CLOSE_LOCALITY1 0x0388
+#define TXT_CR_CMD_OPEN_LOCALITY2  0x0390
+#define TXT_CR_CMD_CLOSE_LOCALITY2 0x0398
+#define TXT_CR_CMD_SECRETS 0x08e0
+#define TXT_CR_CMD_NO_SECRETS  0x08e8
+#define TXT_CR_E2STS   0x08f0
+
+/* TXT default register value */
+#define TXT_REGVALUE_ONE   0x1ULL
+
+/* TXTCR_STS status bits */
+#define TXT_SENTER_DONE_STSBIT(0)
+#define TXT_SEXIT_DONE_STS BIT(1)
+
+/*
+ * SINIT/MLE Capabilities Field Bit Definitions
+ */
+#define TXT_SINIT_MLE_CAP_WAKE_GETSEC  0
+#define TXT_SINIT_MLE_CAP_WAKE_MONITOR 1
+
+/*
+ * OS/MLE Secure Launch Specific Definitions
+ */
+#define TXT_OS_MLE_STRUCT_VERSION  1
+#define TXT_OS_MLE_MAX_VARIABLE_MTRRS  32
+
+/*
+ * TXT Heap Table Enumeration
+ */
+#define TXT_BIOS_DATA_TABLE1
+#define TXT_OS_MLE_DATA_TABLE  2
+#define TXT_OS_SINIT_DATA_TABLE3
+#define TXT_SINIT_MLE_DATA_TABLE   4
+#define TXT_SINIT_TABLE_MAXTXT_SINIT_MLE_DATA_TABLE
+
+/*
+ * Secure Launch Defined Error Codes used in MLE-initiated TXT resets.
+ *
+ * TXT Specification
+ * Appendix I ACM Error Codes
+ */
+#define SL_ERROR_GENERIC   0xc0008001
+#define SL_ERROR_TPM_INIT  0xc0008002
+#define SL_ERROR_TPM_INVALID_LOG20 0xc0008003
+#define SL_ERROR_TPM_LOGGING_FAILED0xc0008004
+#define SL_ERROR_REGION_STRADDLE_4GB   0xc0008005
+#define SL_ERROR_TPM_EXTEND0xc0008006
+#define SL_ERROR_MTRR_INV_VCNT 0xc0008007
+#define SL_ERROR_MTRR_INV_DEF_TYPE 0xc0008008
+#define SL_ERROR_MTRR_INV_BASE 0xc0008009
+#define SL_ERROR_MTRR_INV_MASK 0xc000800a
+#define SL_ERROR_MSR_INV_MISC_EN   0xc000800b
+#define SL_ERROR_INV_AP_INTERRUPT  0xc000800c
+#define SL_ERROR_INTEGER_OVERFLOW  0xc000800d
+#define SL_ERROR_HEAP_WALK 0xc000800e
+#define SL_ERROR_HEAP_MAP  0xc000800f
+#define SL_ERROR_REGION_ABOVE_4GB  0xc0008010
+#define SL_ERROR_HEAP_INVALID_DMAR 0xc0008011
+#define SL_ERROR_HEAP_DMAR_SIZE0xc0008012
+#define SL_ERROR_HEAP_DMAR_MAP 0xc0008013
+#define SL_ERROR_HI_PMR_BASE   0xc

[PATCH v11 01/20] Documentation/x86: Secure Launch kernel documentation

2024-09-13 Thread Ross Philipson
From: "Daniel P. Smith" 

Introduce background, overview and configuration/ABI information
for the Secure Launch kernel feature.

Signed-off-by: Daniel P. Smith 
Signed-off-by: Ross Philipson 
Reviewed-by: Bagas Sanjaya 
---
 Documentation/security/index.rst  |   1 +
 .../security/launch-integrity/index.rst   |  11 +
 .../security/launch-integrity/principles.rst  | 317 ++
 .../secure_launch_details.rst | 588 ++
 .../secure_launch_overview.rst| 252 
 5 files changed, 1169 insertions(+)
 create mode 100644 Documentation/security/launch-integrity/index.rst
 create mode 100644 Documentation/security/launch-integrity/principles.rst
 create mode 100644 
Documentation/security/launch-integrity/secure_launch_details.rst
 create mode 100644 
Documentation/security/launch-integrity/secure_launch_overview.rst

diff --git a/Documentation/security/index.rst b/Documentation/security/index.rst
index 59f8fc106cb0..56e31fb3d91f 100644
--- a/Documentation/security/index.rst
+++ b/Documentation/security/index.rst
@@ -19,3 +19,4 @@ Security Documentation
digsig
landlock
secrets/index
+   launch-integrity/index
diff --git a/Documentation/security/launch-integrity/index.rst 
b/Documentation/security/launch-integrity/index.rst
new file mode 100644
index ..838328186dd2
--- /dev/null
+++ b/Documentation/security/launch-integrity/index.rst
@@ -0,0 +1,11 @@
+=
+System Launch Integrity documentation
+=
+
+.. toctree::
+   :maxdepth: 1
+
+   principles
+   secure_launch_overview
+   secure_launch_details
+
diff --git a/Documentation/security/launch-integrity/principles.rst 
b/Documentation/security/launch-integrity/principles.rst
new file mode 100644
index ..1c9c0555ff05
--- /dev/null
+++ b/Documentation/security/launch-integrity/principles.rst
@@ -0,0 +1,317 @@
+.. SPDX-License-Identifier: GPL-2.0
+.. Copyright © 2019-2024 Daniel P. Smith 
+
+===
+System Launch Integrity
+===
+
+:Author: Daniel P. Smith
+:Date: August 2024
+
+This document serves to establish a common understanding of what a system
+launch is, the integrity concern for system launch, and why using a Root of 
Trust
+(RoT) from a Dynamic Launch may be desirable. Throughout this document,
+terminology from the Trusted Computing Group (TCG) and National Institute for
+Science and Technology (NIST) is used to ensure that vendor natural language is
+used to describe and reference security-related concepts.
+
+System Launch
+=
+
+There is a tendency to only consider the classical power-on boot as the only
+means to launch an Operating System (OS) on a computer system. In fact, most
+modern processors support two system launch methods. To provide clarity,
+it is important to establish a common definition of a system launch: during
+a single power life cycle of a system, a system launch consists of an 
initialization
+event, typically in hardware, that is followed by an executing software payload
+that takes the system from the initialized state to a running state. Driven by
+the Trusted Computing Group (TCG) architecture, modern processors are able to
+support two methods of system launch. These two methods of system launch are 
known
+as Static Launch and Dynamic Launch.
+
+Static Launch
+-
+
+Static launch is the system launch associated with the power cycle of the CPU.
+Thus, static launch refers to the classical power-on boot where the
+initialization event is the release of the CPU from reset and the system
+firmware is the software payload that brings the system up to a running state.
+Since static launch is the system launch associated with the beginning of the
+power lifecycle of a system, it is therefore a fixed, one-time system launch.
+It is because of this that static launch is referred to and thought of as being
+"static".
+
+Dynamic Launch
+--
+
+Modern CPUs architectures provides a mechanism to re-initialize the system to a
+"known good" state without requiring a power event. This re-initialization
+event is the event for a dynamic launch and is referred to as the Dynamic
+Launch Event (DLE). The DLE functions by accepting a software payload, referred
+to as the Dynamic Configuration Environment (DCE), that execution is handed to
+after the DLE is invoked. The DCE is responsible for bringing the system back
+to a running state. Since the dynamic launch is not tied to a power event like
+the static launch, this enables a dynamic launch to be initiated at any time
+and multiple times during a single power life cycle. This dynamism is the
+reasoning behind referring to this system launch as "dynamic".
+
+Because a dynamic launch can be conducted at any time during a single power
+life cycle, they are classified into one of two types: an early launch or a
+late 

[PATCH v11 03/20] x86: Secure Launch Resource Table header file

2024-09-13 Thread Ross Philipson
Introduce the Secure Launch Resource Table which forms the formal
interface between the pre and post launch code.

Signed-off-by: Ross Philipson 
---
 include/linux/slr_table.h | 276 ++
 1 file changed, 276 insertions(+)
 create mode 100644 include/linux/slr_table.h

diff --git a/include/linux/slr_table.h b/include/linux/slr_table.h
new file mode 100644
index ..a44fd6fbce23
--- /dev/null
+++ b/include/linux/slr_table.h
@@ -0,0 +1,276 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+/*
+ * TrenchBoot Secure Launch Resource Table
+ *
+ * The Secure Launch Resource Table is TrenchBoot project defined
+ * specfication to provide cross-architecture compatibility. See
+ * TrenchBoot Secure Launch kernel documentation for details.
+ *
+ * Copyright (c) 2024 Apertus Solutions, LLC
+ * Copyright (c) 2024, Oracle and/or its affiliates.
+ */
+
+#ifndef _LINUX_SLR_TABLE_H
+#define _LINUX_SLR_TABLE_H
+
+/* Put this in efi.h if it becomes a standard */
+#define SLR_TABLE_GUID EFI_GUID(0x877a9b2a, 0x0385, 
0x45d1, 0xa0, 0x34, 0x9d, 0xac, 0x9c, 0x9e, 0x56, 0x5f)
+
+/* SLR table header values */
+#define SLR_TABLE_MAGIC0x4452544d
+#define SLR_TABLE_REVISION 1
+
+/* Current revisions for the policy and UEFI config */
+#define SLR_POLICY_REVISION1
+#define SLR_UEFI_CONFIG_REVISION   1
+
+/* SLR defined architectures */
+#define SLR_INTEL_TXT  1
+#define SLR_AMD_SKINIT 2
+
+/* SLR defined bootloaders */
+#define SLR_BOOTLOADER_INVALID 0
+#define SLR_BOOTLOADER_GRUB1
+
+/* Log formats */
+#define SLR_DRTM_TPM12_LOG 1
+#define SLR_DRTM_TPM20_LOG 2
+
+/* DRTM Policy Entry Flags */
+#define SLR_POLICY_FLAG_MEASURED   0x1
+#define SLR_POLICY_IMPLICIT_SIZE   0x2
+
+/* Array Lengths */
+#define TPM_EVENT_INFO_LENGTH  32
+#define TXT_VARIABLE_MTRRS_LENGTH  32
+
+/* Tags */
+#define SLR_ENTRY_INVALID  0x
+#define SLR_ENTRY_DL_INFO  0x0001
+#define SLR_ENTRY_LOG_INFO 0x0002
+#define SLR_ENTRY_ENTRY_POLICY 0x0003
+#define SLR_ENTRY_INTEL_INFO   0x0004
+#define SLR_ENTRY_AMD_INFO 0x0005
+#define SLR_ENTRY_ARM_INFO 0x0006
+#define SLR_ENTRY_UEFI_INFO0x0007
+#define SLR_ENTRY_UEFI_CONFIG  0x0008
+#define SLR_ENTRY_END  0x
+
+/* Entity Types */
+#define SLR_ET_UNSPECIFIED 0x
+#define SLR_ET_SLRT0x0001
+#define SLR_ET_BOOT_PARAMS 0x0002
+#define SLR_ET_SETUP_DATA  0x0003
+#define SLR_ET_CMDLINE 0x0004
+#define SLR_ET_UEFI_MEMMAP 0x0005
+#define SLR_ET_RAMDISK 0x0006
+#define SLR_ET_TXT_OS2MLE  0x0010
+#define SLR_ET_UNUSED  0x
+
+#ifndef __ASSEMBLY__
+
+/*
+ * Primary Secure Launch Resource Table Header
+ */
+struct slr_table {
+   u32 magic;
+   u16 revision;
+   u16 architecture;
+   u32 size;
+   u32 max_size;
+   /* table entries */
+} __packed;
+
+/*
+ * Common SLRT Table Header
+ */
+struct slr_entry_hdr {
+   u32 tag;
+   u32 size;
+} __packed;
+
+/*
+ * Boot loader context
+ */
+struct slr_bl_context {
+   u16 bootloader;
+   u16 reserved[3];
+   u64 context;
+} __packed;
+
+/*
+ * Dynamic Launch Callback Function type
+ */
+typedef void (*dl_handler_func)(struct slr_bl_context *bl_context);
+
+/*
+ * DRTM Dynamic Launch Configuration
+ */
+struct slr_entry_dl_info {
+   struct slr_entry_hdr hdr;
+   u64 dce_size;
+   u64 dce_base;
+   u64 dlme_size;
+   u64 dlme_base;
+   u64 dlme_entry;
+   struct slr_bl_context bl_context;
+   u64 dl_handler;
+} __packed;
+
+/*
+ * TPM Log Information
+ */
+struct slr_entry_log_info {
+   struct slr_entry_hdr hdr;
+   u16 format;
+   u16 reserved;
+   u32 size;
+   u64 addr;
+} __packed;
+
+/*
+ * DRTM Measurement Entry
+ */
+struct slr_policy_entry {
+   u16 pcr;
+   u16 entity_type;
+   u16 flags;
+   u16 reserved;
+   u64 size;
+   u64 entity;
+   char evt_info[TPM_EVENT_INFO_LENGTH];
+} __packed;
+
+/*
+ * DRTM Measurement Policy
+ */
+struct slr_entry_policy {
+   struct slr_entry_hdr hdr;
+   u16 reserved[2];
+   u16 revision;
+   u16 nr_entries;
+   struct slr_policy_entry policy_entries[];
+} __packed;
+
+/*
+ * Secure Launch defined MTRR saving structures
+ */
+struct slr_txt_mtrr_pair {
+   u64 mtrr_physbase;
+   u64 mtrr_physmask;
+} __packed;
+
+struct slr_txt_mtrr_state {
+   u64 default_mem_type;
+   u64 mtrr_vcnt;
+   struct slr_txt_mtrr_pair mtrr_pair[TXT_VARIABLE_MTRRS_LENGTH];
+} __packed;
+
+/*
+ * Intel TXT Info table
+ */
+struct slr_entry_intel_info {
+   struct slr_entry_hdr hdr;
+   u64 txt_heap;
+   u64 saved_misc_enable_msr;
+   struct slr_txt_mtrr_state saved_bsp_mtrrs;
+} __packed;
+
+/*
+ * UEFI config measurement entry
+ */
+struct slr_uefi_cfg_entry {
+   u16 pcr;
+   u16 reserved;
+   u32 size;
+   u64 cfg; /* address or

Re: [PATCH v10 20/20] x86/efi: EFI stub DRTM launch support for Secure Launch

2024-08-29 Thread ross . philipson

On 8/29/24 6:28 AM, Ard Biesheuvel wrote:

On Thu, 29 Aug 2024 at 15:24, Jonathan McDowell  wrote:


On Wed, Aug 28, 2024 at 01:19:16PM -0700, ross.philip...@oracle.com wrote:

On 8/28/24 10:14 AM, Ard Biesheuvel wrote:

On Wed, 28 Aug 2024 at 19:09, kernel test robot  wrote:


Hi Ross,

kernel test robot noticed the following build warnings:

[auto build test WARNING on tip/x86/core]
[also build test WARNING on char-misc/char-misc-testing 
char-misc/char-misc-next char-misc/char-misc-linus herbert-cryptodev-2.6/master 
efi/next linus/master v6.11-rc5]
[cannot apply to herbert-crypto-2.6/master next-20240828]
[If your patch is applied to the wrong git tree, kindly drop us a note.
And when submitting patch, we suggest to use '--base' as documented in
https://urldefense.com/v3/__https://git-scm.com/docs/git-format-patch*_base_tree_information__;Iw!!ACWV5N9M2RV99hQ!KhkZK77BXRIR4F24tKkUeIlIrdqXtUW2vcnDV74c_5BmrQBQaQ4FqcDKKv9LB3HQUocTGkrmIxuz-LAC$
 ]

url:
https://urldefense.com/v3/__https://github.com/intel-lab-lkp/linux/commits/Ross-Philipson/Documentation-x86-Secure-Launch-kernel-documentation/20240827-065225__;!!ACWV5N9M2RV99hQ!KhkZK77BXRIR4F24tKkUeIlIrdqXtUW2vcnDV74c_5BmrQBQaQ4FqcDKKv9LB3HQUocTGkrmI7Z6SQKy$
base:   tip/x86/core
patch link:
https://urldefense.com/v3/__https://lore.kernel.org/r/20240826223835.3928819-21-ross.philipson*40oracle.com__;JQ!!ACWV5N9M2RV99hQ!KhkZK77BXRIR4F24tKkUeIlIrdqXtUW2vcnDV74c_5BmrQBQaQ4FqcDKKv9LB3HQUocTGkrmIzWfs1XZ$
patch subject: [PATCH v10 20/20] x86/efi: EFI stub DRTM launch support for 
Secure Launch
config: i386-randconfig-062-20240828 
(https://urldefense.com/v3/__https://download.01.org/0day-ci/archive/20240829/202408290030.febuhhbr-...@intel.com/config__;!!ACWV5N9M2RV99hQ!KhkZK77BXRIR4F24tKkUeIlIrdqXtUW2vcnDV74c_5BmrQBQaQ4FqcDKKv9LB3HQUocTGkrmIwkYG0TY$
 )



This is a i386 32-bit build, which makes no sense: this stuff should
just declare 'depends on 64BIT'


Our config entry already has 'depends on X86_64' which in turn depends on
64BIT. I would think that would be enough. Do you think it needs an explicit
'depends on 64BIT' in our entry as well?


The error is in x86-stub.c, which is pre-existing and compiled for 32
bit as well, so you need more than a "depends" here.



Ugh, that means this is my fault - apologies. Replacing the #ifdef
with IS_ENABLED() makes the code visible to the 32-bit compiler, even
though the code is disregarded.

I'd still prefer IS_ENABLED(), but this would require the code in
question to live in a separate compilation unit (which depends on
CONFIG_SECURE_LAUNCH). If that is too fiddly, feel free to bring back
the #ifdef CONFIG_SECURE_LAUNCH here (and retain my R-b)


Thanks to both of you for the followup.

If there was a very large amount of new code here, I would consider 
making a separate compilation unit. I can't really say if this is a 
wider issue with the the EFI libstub code but if it ware broken up 
later, it would make sense to move our bits to where 64bit only code lives.


Given that it is a small block of code though, I am inclined to just go 
back to #ifdef's for now.


Thanks
Ross



Re: [PATCH v10 20/20] x86/efi: EFI stub DRTM launch support for Secure Launch

2024-08-28 Thread ross . philipson

On 8/28/24 10:14 AM, Ard Biesheuvel wrote:

On Wed, 28 Aug 2024 at 19:09, kernel test robot  wrote:


Hi Ross,

kernel test robot noticed the following build warnings:

[auto build test WARNING on tip/x86/core]
[also build test WARNING on char-misc/char-misc-testing 
char-misc/char-misc-next char-misc/char-misc-linus herbert-cryptodev-2.6/master 
efi/next linus/master v6.11-rc5]
[cannot apply to herbert-crypto-2.6/master next-20240828]
[If your patch is applied to the wrong git tree, kindly drop us a note.
And when submitting patch, we suggest to use '--base' as documented in
https://urldefense.com/v3/__https://git-scm.com/docs/git-format-patch*_base_tree_information__;Iw!!ACWV5N9M2RV99hQ!KhkZK77BXRIR4F24tKkUeIlIrdqXtUW2vcnDV74c_5BmrQBQaQ4FqcDKKv9LB3HQUocTGkrmIxuz-LAC$
 ]

url:
https://urldefense.com/v3/__https://github.com/intel-lab-lkp/linux/commits/Ross-Philipson/Documentation-x86-Secure-Launch-kernel-documentation/20240827-065225__;!!ACWV5N9M2RV99hQ!KhkZK77BXRIR4F24tKkUeIlIrdqXtUW2vcnDV74c_5BmrQBQaQ4FqcDKKv9LB3HQUocTGkrmI7Z6SQKy$
base:   tip/x86/core
patch link:
https://urldefense.com/v3/__https://lore.kernel.org/r/20240826223835.3928819-21-ross.philipson*40oracle.com__;JQ!!ACWV5N9M2RV99hQ!KhkZK77BXRIR4F24tKkUeIlIrdqXtUW2vcnDV74c_5BmrQBQaQ4FqcDKKv9LB3HQUocTGkrmIzWfs1XZ$
patch subject: [PATCH v10 20/20] x86/efi: EFI stub DRTM launch support for 
Secure Launch
config: i386-randconfig-062-20240828 
(https://urldefense.com/v3/__https://download.01.org/0day-ci/archive/20240829/202408290030.febuhhbr-...@intel.com/config__;!!ACWV5N9M2RV99hQ!KhkZK77BXRIR4F24tKkUeIlIrdqXtUW2vcnDV74c_5BmrQBQaQ4FqcDKKv9LB3HQUocTGkrmIwkYG0TY$
 )



This is a i386 32-bit build, which makes no sense: this stuff should
just declare 'depends on 64BIT'


Our config entry already has 'depends on X86_64' which in turn depends 
on 64BIT. I would think that would be enough. Do you think it needs an 
explicit 'depends on 64BIT' in our entry as well?


Thanks
Ross





compiler: clang version 18.1.5 
(https://urldefense.com/v3/__https://github.com/llvm/llvm-project__;!!ACWV5N9M2RV99hQ!KhkZK77BXRIR4F24tKkUeIlIrdqXtUW2vcnDV74c_5BmrQBQaQ4FqcDKKv9LB3HQUocTGkrmI2SDLdTN$
  617a15a9eac96088ae5e9134248d8236e34b91b1)
reproduce (this is a W=1 build): 
(https://urldefense.com/v3/__https://download.01.org/0day-ci/archive/20240829/202408290030.febuhhbr-...@intel.com/reproduce__;!!ACWV5N9M2RV99hQ!KhkZK77BXRIR4F24tKkUeIlIrdqXtUW2vcnDV74c_5BmrQBQaQ4FqcDKKv9LB3HQUocTGkrmI5MJDdIG$
 )

If you fix the issue in a separate patch/commit (i.e. not just a new version of
the same patch/commit), kindly add following tags
| Reported-by: kernel test robot 
| Closes: 
https://urldefense.com/v3/__https://lore.kernel.org/oe-kbuild-all/202408290030.febuhhbr-...@intel.com/__;!!ACWV5N9M2RV99hQ!KhkZK77BXRIR4F24tKkUeIlIrdqXtUW2vcnDV74c_5BmrQBQaQ4FqcDKKv9LB3HQUocTGkrmI-MitiqR$

sparse warnings: (new ones prefixed by >>)

drivers/firmware/efi/libstub/x86-stub.c:945:41: sparse: sparse: non 
size-preserving pointer to integer cast

drivers/firmware/efi/libstub/x86-stub.c:953:65: sparse: sparse: non 
size-preserving pointer to integer cast

drivers/firmware/efi/libstub/x86-stub.c:980:70: sparse: sparse: non 
size-preserving integer to pointer cast

drivers/firmware/efi/libstub/x86-stub.c:1014:45: sparse: sparse: non 
size-preserving integer to pointer cast

vim +945 drivers/firmware/efi/libstub/x86-stub.c

927
928  static bool efi_secure_launch_update_boot_params(struct slr_table 
*slrt,
929   struct boot_params 
*boot_params)
930  {
931  struct slr_entry_intel_info *txt_info;
932  struct slr_entry_policy *policy;
933  struct txt_os_mle_data *os_mle;
934  bool updated = false;
935  int i;
936
937  txt_info = slr_next_entry_by_tag(slrt, NULL, 
SLR_ENTRY_INTEL_INFO);
938  if (!txt_info)
939  return false;
940
941  os_mle = txt_os_mle_data_start((void *)txt_info->txt_heap);
942  if (!os_mle)
943  return false;
944
  > 945  os_mle->boot_params_addr = (u64)boot_params;
946
947  policy = slr_next_entry_by_tag(slrt, NULL, 
SLR_ENTRY_ENTRY_POLICY);
948  if (!policy)
949  return false;
950
951  for (i = 0; i < policy->nr_entries; i++) {
952  if (policy->policy_entries[i].entity_type == 
SLR_ET_BOOT_PARAMS) {
953  policy->policy_entries[i].entity = 
(u64)boot_params;
954  updated = true;
955  break;
956  }
957  }
958
959  /*
960   * If this is a PE entry into EFI stub the mocked up boot 
params will
961   * be missing some of the setup header 

Re: [PATCH v9 06/19] x86: Add early SHA-1 support for Secure Launch early measurements

2024-08-28 Thread ross . philipson

On 8/27/24 11:14 AM, 'Eric Biggers' via trenchboot-devel wrote:

On Thu, May 30, 2024 at 07:16:56PM -0700, Eric Biggers wrote:

On Thu, May 30, 2024 at 06:03:18PM -0700, Ross Philipson wrote:

From: "Daniel P. Smith" 

For better or worse, Secure Launch needs SHA-1 and SHA-256. The
choice of hashes used lie with the platform firmware, not with
software, and is often outside of the users control.

Even if we'd prefer to use SHA-256-only, if firmware elected to start us
with the SHA-1 and SHA-256 backs active, we still need SHA-1 to parse
the TPM event log thus far, and deliberately cap the SHA-1 PCRs in order
to safely use SHA-256 for everything else.

The SHA-1 code here has its origins in the code from the main kernel:

commit c4d5b9ffa31f ("crypto: sha1 - implement base layer for SHA-1")

A modified version of this code was introduced to the lib/crypto/sha1.c
to bring it in line with the SHA-256 code and allow it to be pulled into the
setup kernel in the same manner as SHA-256 is.

Signed-off-by: Daniel P. Smith 
Signed-off-by: Ross Philipson 


Thanks.  This explanation doesn't seem to have made it into the actual code or
documentation.  Can you please get it into a more permanent location?


I see that a new version of the patchset was sent out but this suggestion was
not taken.  Are you planning to address it?


Sorry we sort of overlooked that part of the request. We will take the 
latest commit message, clean it up a little and put it in 
boot/compressed/sha1.c file as a comment. I believe that is what you 
would like us to do.


Thanks
Ross



- Eric






Re: [PATCH v10 20/20] x86/efi: EFI stub DRTM launch support for Secure Launch

2024-08-27 Thread ross . philipson

On 8/27/24 3:28 AM, Ard Biesheuvel wrote:

On Tue, 27 Aug 2024 at 00:44, Ross Philipson  wrote:


This support allows the DRTM launch to be initiated after an EFI stub
launch of the Linux kernel is done. This is accomplished by providing
a handler to jump to when a Secure Launch is in progress. This has to be
called after the EFI stub does Exit Boot Services.

Signed-off-by: Ross Philipson 


Reviewed-by: Ard Biesheuvel 


Thank you for the two reviews
Ross




---
  drivers/firmware/efi/libstub/efistub.h  |  8 ++
  drivers/firmware/efi/libstub/x86-stub.c | 98 +
  2 files changed, 106 insertions(+)

diff --git a/drivers/firmware/efi/libstub/efistub.h 
b/drivers/firmware/efi/libstub/efistub.h
index d33ccbc4a2c6..baf42d6d0796 100644
--- a/drivers/firmware/efi/libstub/efistub.h
+++ b/drivers/firmware/efi/libstub/efistub.h
@@ -135,6 +135,14 @@ void efi_set_u64_split(u64 data, u32 *lo, u32 *hi)
 *hi = upper_32_bits(data);
  }

+static inline
+void efi_set_u64_form(u32 lo, u32 hi, u64 *data)
+{
+   u64 upper = hi;
+
+   *data = lo | upper << 32;
+}
+
  /*
   * Allocation types for calls to boottime->allocate_pages.
   */
diff --git a/drivers/firmware/efi/libstub/x86-stub.c 
b/drivers/firmware/efi/libstub/x86-stub.c
index f8e465da344d..04786c1b3b5d 100644
--- a/drivers/firmware/efi/libstub/x86-stub.c
+++ b/drivers/firmware/efi/libstub/x86-stub.c
@@ -9,6 +9,8 @@
  #include 
  #include 
  #include 
+#include 
+#include 

  #include 
  #include 
@@ -923,6 +925,99 @@ static efi_status_t efi_decompress_kernel(unsigned long 
*kernel_entry)
 return efi_adjust_memory_range_protection(addr, kernel_text_size);
  }

+static bool efi_secure_launch_update_boot_params(struct slr_table *slrt,
+struct boot_params 
*boot_params)
+{
+   struct slr_entry_intel_info *txt_info;
+   struct slr_entry_policy *policy;
+   struct txt_os_mle_data *os_mle;
+   bool updated = false;
+   int i;
+
+   txt_info = slr_next_entry_by_tag(slrt, NULL, SLR_ENTRY_INTEL_INFO);
+   if (!txt_info)
+   return false;
+
+   os_mle = txt_os_mle_data_start((void *)txt_info->txt_heap);
+   if (!os_mle)
+   return false;
+
+   os_mle->boot_params_addr = (u64)boot_params;
+
+   policy = slr_next_entry_by_tag(slrt, NULL, SLR_ENTRY_ENTRY_POLICY);
+   if (!policy)
+   return false;
+
+   for (i = 0; i < policy->nr_entries; i++) {
+   if (policy->policy_entries[i].entity_type == 
SLR_ET_BOOT_PARAMS) {
+   policy->policy_entries[i].entity = (u64)boot_params;
+   updated = true;
+   break;
+   }
+   }
+
+   /*
+* If this is a PE entry into EFI stub the mocked up boot params will
+* be missing some of the setup header data needed for the second stage
+* of the Secure Launch boot.
+*/
+   if (image) {
+   struct setup_header *hdr = (struct setup_header *)((u8 
*)image->image_base +
+   offsetof(struct boot_params, hdr));
+   u64 cmdline_ptr;
+
+   boot_params->hdr.setup_sects = hdr->setup_sects;
+   boot_params->hdr.syssize = hdr->syssize;
+   boot_params->hdr.version = hdr->version;
+   boot_params->hdr.loadflags = hdr->loadflags;
+   boot_params->hdr.kernel_alignment = hdr->kernel_alignment;
+   boot_params->hdr.min_alignment = hdr->min_alignment;
+   boot_params->hdr.xloadflags = hdr->xloadflags;
+   boot_params->hdr.init_size = hdr->init_size;
+   boot_params->hdr.kernel_info_offset = hdr->kernel_info_offset;
+   efi_set_u64_form(boot_params->hdr.cmd_line_ptr, 
boot_params->ext_cmd_line_ptr,
+&cmdline_ptr);
+   boot_params->hdr.cmdline_size = strlen((const char 
*)cmdline_ptr);
+   }
+
+   return updated;
+}
+
+static void efi_secure_launch(struct boot_params *boot_params)
+{
+   struct slr_entry_dl_info *dlinfo;
+   efi_guid_t guid = SLR_TABLE_GUID;
+   dl_handler_func handler_callback;
+   struct slr_table *slrt;
+
+   if (!IS_ENABLED(CONFIG_SECURE_LAUNCH))
+   return;
+
+   /*
+* The presence of this table indicated a Secure Launch
+* is being requested.
+*/
+   slrt = (struct slr_table *)get_efi_config_table(guid);
+   if (!slrt || slrt->magic != SLR_TABLE_MAGIC)
+   return;
+
+   /*
+* Since the EFI stub library creates its own boot_params on entry, the
+* SLRT and TXT heap have to be updated with this version.
+*/
+   if (!efi_secure_launch_update_boot_params(slrt, boot_params))
+   return;
+
+   

[PATCH v10 04/20] x86: Secure Launch main header file

2024-08-26 Thread Ross Philipson
Introduce the main Secure Launch header file used in the early SL stub
and the early setup code.

Signed-off-by: Ross Philipson 
---
 include/linux/slaunch.h | 548 
 1 file changed, 548 insertions(+)
 create mode 100644 include/linux/slaunch.h

diff --git a/include/linux/slaunch.h b/include/linux/slaunch.h
new file mode 100644
index ..efb1235b3e1b
--- /dev/null
+++ b/include/linux/slaunch.h
@@ -0,0 +1,548 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+/*
+ * Main Secure Launch header file.
+ *
+ * Copyright (c) 2024 Apertus Solutions, LLC
+ * Copyright (c) 2024, Oracle and/or its affiliates.
+ */
+
+#ifndef _LINUX_SLAUNCH_H
+#define _LINUX_SLAUNCH_H
+
+/*
+ * Secure Launch Defined State Flags
+ */
+#define SL_FLAG_ACTIVE 0x0001
+#define SL_FLAG_ARCH_TXT   0x0002
+
+/*
+ * Secure Launch CPU Type
+ */
+#define SL_CPU_INTEL   1
+
+#define __SL32_CS  0x0008
+#define __SL32_DS  0x0010
+
+/*
+ * Intel Safer Mode Extensions (SMX)
+ *
+ * Intel SMX provides a programming interface to establish a Measured Launched
+ * Environment (MLE). The measurement and protection mechanisms supported by 
the
+ * capabilities of an Intel Trusted Execution Technology (TXT) platform. SMX is
+ * the processor’s programming interface in an Intel TXT platform.
+ *
+ * See:
+ *   Intel SDM Volume 2 - 6.1 "Safer Mode Extensions Reference"
+ *   Intel Trusted Execution Technology - Measured Launch Environment 
Developer’s Guide
+ */
+
+/*
+ * SMX GETSEC Leaf Functions
+ */
+#define SMX_X86_GETSEC_SEXIT   5
+#define SMX_X86_GETSEC_SMCTRL  7
+#define SMX_X86_GETSEC_WAKEUP  8
+
+/*
+ * Intel Trusted Execution Technology MMIO Registers Banks
+ */
+#define TXT_PUB_CONFIG_REGS_BASE   0xfed3
+#define TXT_PRIV_CONFIG_REGS_BASE  0xfed2
+#define TXT_NR_CONFIG_PAGES ((TXT_PUB_CONFIG_REGS_BASE - \
+ TXT_PRIV_CONFIG_REGS_BASE) >> PAGE_SHIFT)
+
+/*
+ * Intel Trusted Execution Technology (TXT) Registers
+ */
+#define TXT_CR_STS 0x
+#define TXT_CR_ESTS0x0008
+#define TXT_CR_ERRORCODE   0x0030
+#define TXT_CR_CMD_RESET   0x0038
+#define TXT_CR_CMD_CLOSE_PRIVATE   0x0048
+#define TXT_CR_DIDVID  0x0110
+#define TXT_CR_VER_EMIF0x0200
+#define TXT_CR_CMD_UNLOCK_MEM_CONFIG   0x0218
+#define TXT_CR_SINIT_BASE  0x0270
+#define TXT_CR_SINIT_SIZE  0x0278
+#define TXT_CR_MLE_JOIN0x0290
+#define TXT_CR_HEAP_BASE   0x0300
+#define TXT_CR_HEAP_SIZE   0x0308
+#define TXT_CR_SCRATCHPAD  0x0378
+#define TXT_CR_CMD_OPEN_LOCALITY1  0x0380
+#define TXT_CR_CMD_CLOSE_LOCALITY1 0x0388
+#define TXT_CR_CMD_OPEN_LOCALITY2  0x0390
+#define TXT_CR_CMD_CLOSE_LOCALITY2 0x0398
+#define TXT_CR_CMD_SECRETS 0x08e0
+#define TXT_CR_CMD_NO_SECRETS  0x08e8
+#define TXT_CR_E2STS   0x08f0
+
+/* TXT default register value */
+#define TXT_REGVALUE_ONE   0x1ULL
+
+/* TXTCR_STS status bits */
+#define TXT_SENTER_DONE_STSBIT(0)
+#define TXT_SEXIT_DONE_STS BIT(1)
+
+/*
+ * SINIT/MLE Capabilities Field Bit Definitions
+ */
+#define TXT_SINIT_MLE_CAP_WAKE_GETSEC  0
+#define TXT_SINIT_MLE_CAP_WAKE_MONITOR 1
+
+/*
+ * OS/MLE Secure Launch Specific Definitions
+ */
+#define TXT_OS_MLE_STRUCT_VERSION  1
+#define TXT_OS_MLE_MAX_VARIABLE_MTRRS  32
+
+/*
+ * TXT Heap Table Enumeration
+ */
+#define TXT_BIOS_DATA_TABLE1
+#define TXT_OS_MLE_DATA_TABLE  2
+#define TXT_OS_SINIT_DATA_TABLE3
+#define TXT_SINIT_MLE_DATA_TABLE   4
+#define TXT_SINIT_TABLE_MAXTXT_SINIT_MLE_DATA_TABLE
+
+/*
+ * Secure Launch Defined Error Codes used in MLE-initiated TXT resets.
+ *
+ * TXT Specification
+ * Appendix I ACM Error Codes
+ */
+#define SL_ERROR_GENERIC   0xc0008001
+#define SL_ERROR_TPM_INIT  0xc0008002
+#define SL_ERROR_TPM_INVALID_LOG20 0xc0008003
+#define SL_ERROR_TPM_LOGGING_FAILED0xc0008004
+#define SL_ERROR_REGION_STRADDLE_4GB   0xc0008005
+#define SL_ERROR_TPM_EXTEND0xc0008006
+#define SL_ERROR_MTRR_INV_VCNT 0xc0008007
+#define SL_ERROR_MTRR_INV_DEF_TYPE 0xc0008008
+#define SL_ERROR_MTRR_INV_BASE 0xc0008009
+#define SL_ERROR_MTRR_INV_MASK 0xc000800a
+#define SL_ERROR_MSR_INV_MISC_EN   0xc000800b
+#define SL_ERROR_INV_AP_INTERRUPT  0xc000800c
+#define SL_ERROR_INTEGER_OVERFLOW  0xc000800d
+#define SL_ERROR_HEAP_WALK 0xc000800e
+#define SL_ERROR_HEAP_MAP  0xc000800f
+#define SL_ERROR_REGION_ABOVE_4GB  0xc0008010
+#define SL_ERROR_HEAP_INVALID_DMAR 0xc0008011
+#define SL_ERROR_HEAP_DMAR_SIZE0xc0008012
+#define SL_ERROR_HEAP_DMAR_MAP 0xc0008013
+#define SL_ERROR_HI_PMR_BASE   0xc

[PATCH v10 00/20] x86: Trenchboot secure dynamic launch Linux kernel support

2024-08-26 Thread Ross Philipson
The larger focus of the TrenchBoot project (https://github.com/TrenchBoot) is to
enhance the boot security and integrity in a unified manner. The first area of
focus has been on the Trusted Computing Group's Dynamic Launch for establishing
a hardware Root of Trust for Measurement, also know as DRTM (Dynamic Root of
Trust for Measurement). The project has been and continues to work on providing
a unified means to Dynamic Launch that is a cross-platform (Intel and AMD) and
cross-architecture (x86 and Arm), with our recent involvment in the upcoming
Arm DRTM specification. The order of introducing DRTM to the Linux kernel
follows the maturity of DRTM in the architectures. Intel's Trusted eXecution
Technology (TXT) is present today and only requires a preamble loader, e.g. a
boot loader, and an OS kernel that is TXT-aware. AMD DRTM implementation has
been present since the introduction of AMD-V but requires an additional
component that is AMD specific and referred to in the specification as the
Secure Loader, which the TrenchBoot project has an active prototype in
development. Finally Arm's implementation is in specification development stage
and the project is looking to support it when it becomes available.

This patchset provides detailed documentation of DRTM, the approach used for
adding the capbility, and relevant API/ABI documentation. In addition to the
documentation the patch set introduces Intel TXT support as the first platform
for Linux Secure Launch.

A quick note on terminology. The larger open source project itself is called
TrenchBoot, which is hosted on Github (links below). The kernel feature enabling
the use of Dynamic Launch technology is referred to as "Secure Launch" within
the kernel code. As such the prefixes sl_/SL_ or slaunch/SLAUNCH will be seen
in the code. The stub code discussed above is referred to as the SL stub.

Links:

The TrenchBoot project including documentation:

https://trenchboot.org

The TrenchBoot project on Github:

https://github.com/trenchboot

Intel TXT is documented in its own specification and in the SDM Instruction Set 
volume:

https://www.intel.com/content/dam/www/public/us/en/documents/guides/intel-txt-software-development-guide.pdf
https://software.intel.com/en-us/articles/intel-sdm

AMD SKINIT is documented in the System Programming manual:

https://www.amd.com/system/files/TechDocs/24593.pdf

The TrenchBoot project provides a quick start guide to help get a system
up and running with Secure Launch for Linux:

https://github.com/TrenchBoot/documentation/blob/master/QUICKSTART.md

Patch set based on commit:

torvalds/master/b311c1b497e51a628aa89e7cb954481e5f9dced2

Thanks
Ross Philipson and Daniel P. Smith

Changes in v2:

 - Modified 32b entry code to prevent causing relocations in the compressed
   kernel.
 - Dropped patches for compressed kernel TPM PCR extender.
 - Modified event log code to insert log delimiter events and not rely
   on TPM access.
 - Stop extending PCRs in the early Secure Launch stub code.
 - Removed Kconfig options for hash algorithms and use the algorithms the
   ACM used.
 - Match Secure Launch measurement algorithm use to those reported in the
   TPM 2.0 event log.
 - Read the TPM events out of the TPM and extend them into the PCRs using
   the mainline TPM driver. This is done in the late initcall module.
 - Allow use of alternate PCR 19 and 20 for post ACM measurements.
 - Add Kconfig constraints needed by Secure Launch (disable KASLR
   and add x2apic dependency).
 - Fix testing of SL_FLAGS when determining if Secure Launch is active
   and the architecture is TXT.
 - Use SYM_DATA_START_LOCAL macros in early entry point code.
 - Security audit changes:
   - Validate buffers passed to MLE do not overlap the MLE and are
 properly laid out.
   - Validate buffers and memory regions used by the MLE are
 protected by IOMMU PMRs.
 - Force IOMMU to not use passthrough mode during a Secure Launch.
 - Prevent KASLR use during a Secure Launch.

Changes in v3:

 - Introduce x86 documentation patch to provide background, overview
   and configuration/ABI information for the Secure Launch kernel
   feature.
 - Remove the IOMMU patch with special cases for disabling IOMMU
   passthrough. Configuring the IOMMU is now a documentation matter
   in the previously mentioned new patch.
 - Remove special case KASLR disabling code. Configuring KASLR is now
   a documentation matter in the previously mentioned new patch.
 - Fix incorrect panic on TXT public register read.
 - Properly handle and measure setup_indirect bootparams in the early
   launch code.
 - Use correct compressed kernel image base address when testing buffers
   in the early launch stub code. This bug was introduced by the changes
   to avoid relocation in the compressed kernel.
 - Use CPUID feature bits instead of CPUID vendor strings to determine
   if SMX mode is supported and the system is Intel.
 - Remove early NMI re-enable on the BSP. This

[PATCH v10 20/20] x86/efi: EFI stub DRTM launch support for Secure Launch

2024-08-26 Thread Ross Philipson
This support allows the DRTM launch to be initiated after an EFI stub
launch of the Linux kernel is done. This is accomplished by providing
a handler to jump to when a Secure Launch is in progress. This has to be
called after the EFI stub does Exit Boot Services.

Signed-off-by: Ross Philipson 
---
 drivers/firmware/efi/libstub/efistub.h  |  8 ++
 drivers/firmware/efi/libstub/x86-stub.c | 98 +
 2 files changed, 106 insertions(+)

diff --git a/drivers/firmware/efi/libstub/efistub.h 
b/drivers/firmware/efi/libstub/efistub.h
index d33ccbc4a2c6..baf42d6d0796 100644
--- a/drivers/firmware/efi/libstub/efistub.h
+++ b/drivers/firmware/efi/libstub/efistub.h
@@ -135,6 +135,14 @@ void efi_set_u64_split(u64 data, u32 *lo, u32 *hi)
*hi = upper_32_bits(data);
 }
 
+static inline
+void efi_set_u64_form(u32 lo, u32 hi, u64 *data)
+{
+   u64 upper = hi;
+
+   *data = lo | upper << 32;
+}
+
 /*
  * Allocation types for calls to boottime->allocate_pages.
  */
diff --git a/drivers/firmware/efi/libstub/x86-stub.c 
b/drivers/firmware/efi/libstub/x86-stub.c
index f8e465da344d..04786c1b3b5d 100644
--- a/drivers/firmware/efi/libstub/x86-stub.c
+++ b/drivers/firmware/efi/libstub/x86-stub.c
@@ -9,6 +9,8 @@
 #include 
 #include 
 #include 
+#include 
+#include 
 
 #include 
 #include 
@@ -923,6 +925,99 @@ static efi_status_t efi_decompress_kernel(unsigned long 
*kernel_entry)
return efi_adjust_memory_range_protection(addr, kernel_text_size);
 }
 
+static bool efi_secure_launch_update_boot_params(struct slr_table *slrt,
+struct boot_params 
*boot_params)
+{
+   struct slr_entry_intel_info *txt_info;
+   struct slr_entry_policy *policy;
+   struct txt_os_mle_data *os_mle;
+   bool updated = false;
+   int i;
+
+   txt_info = slr_next_entry_by_tag(slrt, NULL, SLR_ENTRY_INTEL_INFO);
+   if (!txt_info)
+   return false;
+
+   os_mle = txt_os_mle_data_start((void *)txt_info->txt_heap);
+   if (!os_mle)
+   return false;
+
+   os_mle->boot_params_addr = (u64)boot_params;
+
+   policy = slr_next_entry_by_tag(slrt, NULL, SLR_ENTRY_ENTRY_POLICY);
+   if (!policy)
+   return false;
+
+   for (i = 0; i < policy->nr_entries; i++) {
+   if (policy->policy_entries[i].entity_type == 
SLR_ET_BOOT_PARAMS) {
+   policy->policy_entries[i].entity = (u64)boot_params;
+   updated = true;
+   break;
+   }
+   }
+
+   /*
+* If this is a PE entry into EFI stub the mocked up boot params will
+* be missing some of the setup header data needed for the second stage
+* of the Secure Launch boot.
+*/
+   if (image) {
+   struct setup_header *hdr = (struct setup_header *)((u8 
*)image->image_base +
+   offsetof(struct boot_params, hdr));
+   u64 cmdline_ptr;
+
+   boot_params->hdr.setup_sects = hdr->setup_sects;
+   boot_params->hdr.syssize = hdr->syssize;
+   boot_params->hdr.version = hdr->version;
+   boot_params->hdr.loadflags = hdr->loadflags;
+   boot_params->hdr.kernel_alignment = hdr->kernel_alignment;
+   boot_params->hdr.min_alignment = hdr->min_alignment;
+   boot_params->hdr.xloadflags = hdr->xloadflags;
+   boot_params->hdr.init_size = hdr->init_size;
+   boot_params->hdr.kernel_info_offset = hdr->kernel_info_offset;
+   efi_set_u64_form(boot_params->hdr.cmd_line_ptr, 
boot_params->ext_cmd_line_ptr,
+&cmdline_ptr);
+   boot_params->hdr.cmdline_size = strlen((const char 
*)cmdline_ptr);
+   }
+
+   return updated;
+}
+
+static void efi_secure_launch(struct boot_params *boot_params)
+{
+   struct slr_entry_dl_info *dlinfo;
+   efi_guid_t guid = SLR_TABLE_GUID;
+   dl_handler_func handler_callback;
+   struct slr_table *slrt;
+
+   if (!IS_ENABLED(CONFIG_SECURE_LAUNCH))
+   return;
+
+   /*
+* The presence of this table indicated a Secure Launch
+* is being requested.
+*/
+   slrt = (struct slr_table *)get_efi_config_table(guid);
+   if (!slrt || slrt->magic != SLR_TABLE_MAGIC)
+   return;
+
+   /*
+* Since the EFI stub library creates its own boot_params on entry, the
+* SLRT and TXT heap have to be updated with this version.
+*/
+   if (!efi_secure_launch_update_boot_params(slrt, boot_params))
+   return;
+
+   /* Jump through DL stub to initiate Secure Launch */
+   dlinfo = slr_next_entry_by_tag(slrt, NULL, SLR_ENTRY_DL_INFO);
+
+   handler_callback = (dl_handler_func)dlinfo->

[PATCH v10 19/20] x86: Secure Launch late initcall platform module

2024-08-26 Thread Ross Philipson
From: "Daniel P. Smith" 

The Secure Launch platform module is a late init module. During the
init call, the TPM event log is read and measurements taken in the
early boot stub code are located. These measurements are extended
into the TPM PCRs using the mainline TPM kernel driver.

The platform module also registers the securityfs nodes to allow
access to TXT register fields on Intel along with the fetching of
and writing events to the late launch TPM log.

Signed-off-by: Daniel P. Smith 
Signed-off-by: garnetgrimm 
Signed-off-by: Ross Philipson 
---
 arch/x86/kernel/Makefile   |   1 +
 arch/x86/kernel/slmodule.c | 509 +
 2 files changed, 510 insertions(+)
 create mode 100644 arch/x86/kernel/slmodule.c

diff --git a/arch/x86/kernel/Makefile b/arch/x86/kernel/Makefile
index a18a8239bde5..6028903d6661 100644
--- a/arch/x86/kernel/Makefile
+++ b/arch/x86/kernel/Makefile
@@ -73,6 +73,7 @@ obj-$(CONFIG_IA32_EMULATION)  += tls.o
 obj-y  += step.o
 obj-$(CONFIG_INTEL_TXT)+= tboot.o
 obj-$(CONFIG_SECURE_LAUNCH)+= slaunch.o
+obj-$(CONFIG_SECURE_LAUNCH)+= slmodule.o
 obj-$(CONFIG_ISA_DMA_API)  += i8237.o
 obj-y  += stacktrace.o
 obj-y  += cpu/
diff --git a/arch/x86/kernel/slmodule.c b/arch/x86/kernel/slmodule.c
new file mode 100644
index ..46a8f86ea061
--- /dev/null
+++ b/arch/x86/kernel/slmodule.c
@@ -0,0 +1,509 @@
+// SPDX-License-Identifier: GPL-2.0
+/*
+ * Secure Launch late validation/setup, securityfs exposure and finalization.
+ *
+ * Copyright (c) 2024 Apertus Solutions, LLC
+ * Copyright (c) 2024 Assured Information Security, Inc.
+ * Copyright (c) 2024, Oracle and/or its affiliates.
+ */
+
+#define pr_fmt(fmt) KBUILD_MODNAME ": " fmt
+
+#include 
+#include 
+#include 
+#include 
+#include 
+#include 
+#include 
+#include 
+#include 
+#include 
+#include 
+#include 
+#include 
+#include 
+
+/*
+ * The macro DECLARE_TXT_PUB_READ_U is used to read values from the TXT
+ * public registers as unsigned values.
+ */
+#define DECLARE_TXT_PUB_READ_U(size, fmt, msg_size)\
+static ssize_t txt_pub_read_u##size(unsigned int offset,   \
+   loff_t *read_offset,\
+   size_t read_len,\
+   char __user *buf)   \
+{  \
+   char msg_buffer[msg_size];  \
+   u##size reg_value = 0;  \
+   void __iomem *txt;  \
+   \
+   txt = ioremap(TXT_PUB_CONFIG_REGS_BASE, \
+   TXT_NR_CONFIG_PAGES * PAGE_SIZE);   \
+   if (!txt)   \
+   return -EFAULT; \
+   memcpy_fromio(®_value, txt + offset, sizeof(u##size));   \
+   iounmap(txt);   \
+   snprintf(msg_buffer, msg_size, fmt, reg_value); \
+   return simple_read_from_buffer(buf, read_len, read_offset,  \
+   &msg_buffer, msg_size); \
+}
+
+DECLARE_TXT_PUB_READ_U(8, "%#04x\n", 6);
+DECLARE_TXT_PUB_READ_U(32, "%#010x\n", 12);
+DECLARE_TXT_PUB_READ_U(64, "%#018llx\n", 20);
+
+#define DECLARE_TXT_FOPS(reg_name, reg_offset, reg_size)   \
+static ssize_t txt_##reg_name##_read(struct file *flip,
\
+   char __user *buf, size_t read_len, loff_t *read_offset) \
+{  \
+   return txt_pub_read_u##reg_size(reg_offset, read_offset,\
+   read_len, buf); \
+}  \
+static const struct file_operations reg_name##_ops = { \
+   .read = txt_##reg_name##_read,  \
+}
+
+DECLARE_TXT_FOPS(sts, TXT_CR_STS, 64);
+DECLARE_TXT_FOPS(ests, TXT_CR_ESTS, 8);
+DECLARE_TXT_FOPS(errorcode, TXT_CR_ERRORCODE, 32);
+DECLARE_TXT_FOPS(didvid, TXT_CR_DIDVID, 64);
+DECLARE_TXT_FOPS(e2sts, TXT_CR_E2STS, 64);
+DECLARE_TXT_FOPS(ver_emif, TXT_CR_VER_EMIF, 32);
+DECLARE_TXT_FOPS(scratchpad, TXT_CR_SCRATCHPAD, 64);
+
+/*
+ * Securityfs exposure
+ */
+struct memfile {
+   char *name;
+   void *addr;
+   size_t size;
+};
+
+static struct memfile sl_evtlog = {"eventlog", NULL, 0};
+static void *txt_heap;
+static struct txt_heap_event_log_pointer2_1_element *evtlog21;
+static DEFINE_MUTEX

[PATCH v10 18/20] tpm: Add sysfs interface to allow setting and querying the default locality

2024-08-26 Thread Ross Philipson
Expose a sysfs interface to allow user mode to set and query the default
locality set for the TPM chip.

Signed-off-by: Ross Philipson 
---
 drivers/char/tpm/tpm-sysfs.c | 30 ++
 1 file changed, 30 insertions(+)

diff --git a/drivers/char/tpm/tpm-sysfs.c b/drivers/char/tpm/tpm-sysfs.c
index 94231f052ea7..185a2f57d4cb 100644
--- a/drivers/char/tpm/tpm-sysfs.c
+++ b/drivers/char/tpm/tpm-sysfs.c
@@ -324,6 +324,34 @@ static ssize_t null_name_show(struct device *dev, struct 
device_attribute *attr,
 static DEVICE_ATTR_RO(null_name);
 #endif
 
+static ssize_t default_locality_show(struct device *dev,
+struct device_attribute *attr, char *buf)
+{
+   struct tpm_chip *chip = to_tpm_chip(dev);
+
+   return sprintf(buf, "%d\n", chip->default_locality);
+}
+
+static ssize_t default_locality_store(struct device *dev, struct 
device_attribute *attr,
+ const char *buf, size_t count)
+{
+   struct tpm_chip *chip = to_tpm_chip(dev);
+   unsigned int locality;
+
+   if (kstrtouint(buf, 0, &locality))
+   return -ERANGE;
+
+   if (locality >= TPM_MAX_LOCALITY)
+   return -ERANGE;
+
+   if (tpm_chip_set_default_locality(chip, (int)locality))
+   return count;
+   else
+   return 0;
+}
+
+static DEVICE_ATTR_RW(default_locality);
+
 static struct attribute *tpm1_dev_attrs[] = {
&dev_attr_pubek.attr,
&dev_attr_pcrs.attr,
@@ -336,6 +364,7 @@ static struct attribute *tpm1_dev_attrs[] = {
&dev_attr_durations.attr,
&dev_attr_timeouts.attr,
&dev_attr_tpm_version_major.attr,
+   &dev_attr_default_locality.attr,
NULL,
 };
 
@@ -344,6 +373,7 @@ static struct attribute *tpm2_dev_attrs[] = {
 #ifdef CONFIG_TCG_TPM2_HMAC
&dev_attr_null_name.attr,
 #endif
+   &dev_attr_default_locality.attr,
NULL
 };
 
-- 
2.39.3




[PATCH v10 17/20] tpm: Add ability to set the default locality the TPM chip uses

2024-08-26 Thread Ross Philipson
Curently the locality is hard coded to 0 but for DRTM support, access
is needed to localities 1 through 4.

Signed-off-by: Ross Philipson 
---
 drivers/char/tpm/tpm-chip.c | 24 +++-
 include/linux/tpm.h |  4 
 2 files changed, 27 insertions(+), 1 deletion(-)

diff --git a/drivers/char/tpm/tpm-chip.c b/drivers/char/tpm/tpm-chip.c
index 854546000c92..1ca390a742ed 100644
--- a/drivers/char/tpm/tpm-chip.c
+++ b/drivers/char/tpm/tpm-chip.c
@@ -44,7 +44,7 @@ static int tpm_request_locality(struct tpm_chip *chip)
if (!chip->ops->request_locality)
return 0;
 
-   rc = chip->ops->request_locality(chip, 0);
+   rc = chip->ops->request_locality(chip, chip->default_locality);
if (rc < 0)
return rc;
 
@@ -143,6 +143,27 @@ void tpm_chip_stop(struct tpm_chip *chip)
 }
 EXPORT_SYMBOL_GPL(tpm_chip_stop);
 
+/**
+ * tpm_chip_set_default_locality() - set the TPM chip default locality to open
+ * @chip:  a TPM chip to use
+ * @locality:   the default locality to set
+ *
+ * Return:
+ * * true  - Preferred locality set
+ * * false - Invalid locality specified
+ */
+bool tpm_chip_set_default_locality(struct tpm_chip *chip, int locality)
+{
+   if (locality < 0 || locality >=TPM_MAX_LOCALITY)
+   return false;
+
+   mutex_lock(&chip->tpm_mutex);
+   chip->default_locality = locality;
+   mutex_unlock(&chip->tpm_mutex);
+   return true;
+}
+EXPORT_SYMBOL_GPL(tpm_chip_set_default_locality);
+
 /**
  * tpm_try_get_ops() - Get a ref to the tpm_chip
  * @chip: Chip to ref
@@ -374,6 +395,7 @@ struct tpm_chip *tpm_chip_alloc(struct device *pdev,
}
 
chip->locality = -1;
+   chip->default_locality = 0;
return chip;
 
 out:
diff --git a/include/linux/tpm.h b/include/linux/tpm.h
index 98f2c7c1c52e..83e94b2f0cef 100644
--- a/include/linux/tpm.h
+++ b/include/linux/tpm.h
@@ -219,6 +219,9 @@ struct tpm_chip {
u8 null_ec_key_y[EC_PT_SZ];
struct tpm2_auth *auth;
 #endif
+
+   /* preferred locality - default 0 */
+   int default_locality;
 };
 
 #define TPM_HEADER_SIZE10
@@ -446,6 +449,7 @@ static inline u32 tpm2_rc_value(u32 rc)
 extern int tpm_is_tpm2(struct tpm_chip *chip);
 extern __must_check int tpm_try_get_ops(struct tpm_chip *chip);
 extern void tpm_put_ops(struct tpm_chip *chip);
+extern bool tpm_chip_set_default_locality(struct tpm_chip *chip, int locality);
 extern ssize_t tpm_transmit_cmd(struct tpm_chip *chip, struct tpm_buf *buf,
size_t min_rsp_body_length, const char *desc);
 extern int tpm_pcr_read(struct tpm_chip *chip, u32 pcr_idx,
-- 
2.39.3




[PATCH v10 16/20] tpm: Make locality requests return consistent values

2024-08-26 Thread Ross Philipson
From: "Daniel P. Smith" 

The function tpm_tis_request_locality() is expected to return the locality
value that was requested, or a negative error code upon failure. If it is called
while locality_count of struct tis_data is non-zero, no actual locality request
will be sent. Because the ret variable is initially set to 0, the
locality_count will still get increased, and the function will return 0. For a
caller, this would indicate that locality 0 was successfully requested and not
the state changes just mentioned.

Additionally, the function __tpm_tis_request_locality() provides inconsistent
error codes. It will provide either a failed IO write or a -1 should it have
timed out waiting for locality request to succeed.

This commit changes __tpm_tis_request_locality() to return valid negative error
codes to reflect the reason it fails. It then adjusts the return value check in
tpm_tis_request_locality() to check for a non-negative return value before
incrementing locality_cout. In addition, the initial value of the ret value is
set to a negative error to ensure the check does not pass if
__tpm_tis_request_locality() is not called.

Signed-off-by: Daniel P. Smith 
Signed-off-by: Ross Philipson 
---
 drivers/char/tpm/tpm_tis_core.c | 11 +++
 1 file changed, 7 insertions(+), 4 deletions(-)

diff --git a/drivers/char/tpm/tpm_tis_core.c b/drivers/char/tpm/tpm_tis_core.c
index 22ebf679ea69..20a8b341be0d 100644
--- a/drivers/char/tpm/tpm_tis_core.c
+++ b/drivers/char/tpm/tpm_tis_core.c
@@ -210,7 +210,7 @@ static int __tpm_tis_request_locality(struct tpm_chip 
*chip, int l)
 again:
timeout = stop - jiffies;
if ((long)timeout <= 0)
-   return -1;
+   return -EBUSY;
rc = wait_event_interruptible_timeout(priv->int_queue,
  (check_locality
   (chip, l)),
@@ -229,18 +229,21 @@ static int __tpm_tis_request_locality(struct tpm_chip 
*chip, int l)
tpm_msleep(TPM_TIMEOUT);
} while (time_before(jiffies, stop));
}
-   return -1;
+   return -EBUSY;
 }
 
 static int tpm_tis_request_locality(struct tpm_chip *chip, int l)
 {
struct tpm_tis_data *priv = dev_get_drvdata(&chip->dev);
-   int ret = 0;
+   int ret = -EBUSY;
+
+   if (l < 0 || l > TPM_MAX_LOCALITY)
+   return -EINVAL;
 
mutex_lock(&priv->locality_count_mutex);
if (priv->locality_count == 0)
ret = __tpm_tis_request_locality(chip, l);
-   if (!ret)
+   if (ret >= 0)
priv->locality_count++;
mutex_unlock(&priv->locality_count_mutex);
return ret;
-- 
2.39.3




[PATCH v10 15/20] tpm: Ensure tpm is in known state at startup

2024-08-26 Thread Ross Philipson
From: "Daniel P. Smith" 

When tis_tis_core initializes, it assumes all localities are closed. There
are cases when this may not be the case. This commit addresses this by
ensuring all localities are closed before initializing begins.

Signed-off-by: Daniel P. Smith 
Signed-off-by: Ross Philipson 
---
 drivers/char/tpm/tpm_tis_core.c | 11 ++-
 include/linux/tpm.h |  6 ++
 2 files changed, 16 insertions(+), 1 deletion(-)

diff --git a/drivers/char/tpm/tpm_tis_core.c b/drivers/char/tpm/tpm_tis_core.c
index a6967f312837..22ebf679ea69 100644
--- a/drivers/char/tpm/tpm_tis_core.c
+++ b/drivers/char/tpm/tpm_tis_core.c
@@ -1107,7 +1107,7 @@ int tpm_tis_core_init(struct device *dev, struct 
tpm_tis_data *priv, int irq,
u32 intmask;
u32 clkrun_val;
u8 rid;
-   int rc, probe;
+   int rc, probe, i;
struct tpm_chip *chip;
 
chip = tpmm_chip_alloc(dev, &tpm_tis);
@@ -1169,6 +1169,15 @@ int tpm_tis_core_init(struct device *dev, struct 
tpm_tis_data *priv, int irq,
goto out_err;
}
 
+   /*
+* There are environments, for example, those that comply with the TCG 
D-RTM
+* specification that requires the TPM to be left in Locality 2.
+*/
+   for (i = 0; i <= TPM_MAX_LOCALITY; i++) {
+   if (check_locality(chip, i))
+   tpm_tis_relinquish_locality(chip, i);
+   }
+
/* Take control of the TPM's interrupt hardware and shut it off */
rc = tpm_tis_read32(priv, TPM_INT_ENABLE(priv->locality), &intmask);
if (rc < 0)
diff --git a/include/linux/tpm.h b/include/linux/tpm.h
index e93ee8d936a9..98f2c7c1c52e 100644
--- a/include/linux/tpm.h
+++ b/include/linux/tpm.h
@@ -147,6 +147,12 @@ struct tpm_chip_seqops {
  */
 #define TPM2_MAX_CONTEXT_SIZE 4096
 
+/*
+ * The maximum locality (0 - 4) for a TPM, as defined in section 3.2 of the
+ * Client Platform Profile Specification.
+ */
+#define TPM_MAX_LOCALITY   4
+
 struct tpm_chip {
struct device dev;
struct device devs;
-- 
2.39.3




[PATCH v10 14/20] tpm: Protect against locality counter underflow

2024-08-26 Thread Ross Philipson
From: "Daniel P. Smith" 

Commit 933bfc5ad213 introduced the use of a locality counter to control when a
locality request is allowed to be sent to the TPM. In the commit, the counter
is indiscriminately decremented. Thus creating a situation for an integer
underflow of the counter.

Signed-off-by: Daniel P. Smith 
Signed-off-by: Ross Philipson 
Reported-by: Kanth Ghatraju 
---
 drivers/char/tpm/tpm_tis_core.c | 5 -
 1 file changed, 4 insertions(+), 1 deletion(-)

diff --git a/drivers/char/tpm/tpm_tis_core.c b/drivers/char/tpm/tpm_tis_core.c
index fdef214b9f6b..a6967f312837 100644
--- a/drivers/char/tpm/tpm_tis_core.c
+++ b/drivers/char/tpm/tpm_tis_core.c
@@ -180,7 +180,10 @@ static int tpm_tis_relinquish_locality(struct tpm_chip 
*chip, int l)
struct tpm_tis_data *priv = dev_get_drvdata(&chip->dev);
 
mutex_lock(&priv->locality_count_mutex);
-   priv->locality_count--;
+   if (priv->locality_count > 0)
+   priv->locality_count--;
+   else
+   pr_info("Invalid: locality count dropped below zero\n");
if (priv->locality_count == 0)
__tpm_tis_relinquish_locality(priv, l);
mutex_unlock(&priv->locality_count_mutex);
-- 
2.39.3




[PATCH v10 13/20] x86/reboot: Secure Launch SEXIT support on reboot paths

2024-08-26 Thread Ross Philipson
If the MLE kernel is being powered off, rebooted or halted,
then SEXIT must be called. Note that the SEXIT GETSEC leaf
can only be called after a machine_shutdown() has been done on
these paths. The machine_shutdown() is not called on a few paths
like when poweroff action does not have a poweroff callback (into
ACPI code) or when an emergency reset is done. In these cases,
just the TXT registers are finalized but SEXIT is skipped.

Signed-off-by: Ross Philipson 
---
 arch/x86/kernel/reboot.c | 10 ++
 1 file changed, 10 insertions(+)

diff --git a/arch/x86/kernel/reboot.c b/arch/x86/kernel/reboot.c
index 0e0a4cf6b5eb..c66e8896d516 100644
--- a/arch/x86/kernel/reboot.c
+++ b/arch/x86/kernel/reboot.c
@@ -13,6 +13,7 @@
 #include 
 #include 
 #include 
+#include 
 #include 
 #include 
 #include 
@@ -778,6 +779,7 @@ static void native_machine_restart(char *__unused)
 
if (!reboot_force)
machine_shutdown();
+   slaunch_finalize(!reboot_force);
__machine_emergency_restart(0);
 }
 
@@ -788,6 +790,9 @@ static void native_machine_halt(void)
 
tboot_shutdown(TB_SHUTDOWN_HALT);
 
+   /* SEXIT done after machine_shutdown() to meet TXT requirements */
+   slaunch_finalize(1);
+
stop_this_cpu(NULL);
 }
 
@@ -796,8 +801,12 @@ static void native_machine_power_off(void)
if (kernel_can_power_off()) {
if (!reboot_force)
machine_shutdown();
+   slaunch_finalize(!reboot_force);
do_kernel_power_off();
+   } else {
+   slaunch_finalize(0);
}
+
/* A fallback in case there is no PM info available */
tboot_shutdown(TB_SHUTDOWN_HALT);
 }
@@ -825,6 +834,7 @@ void machine_shutdown(void)
 
 void machine_emergency_restart(void)
 {
+   slaunch_finalize(0);
__machine_emergency_restart(1);
 }
 
-- 
2.39.3




[PATCH v10 12/20] kexec: Secure Launch kexec SEXIT support

2024-08-26 Thread Ross Philipson
Prior to running the next kernel via kexec, the Secure Launch code
closes down private SMX resources and does an SEXIT. This allows the
next kernel to start normally without any issues starting the APs etc.

Signed-off-by: Ross Philipson 
---
 arch/x86/kernel/slaunch.c | 72 +++
 kernel/kexec_core.c   |  4 +++
 2 files changed, 76 insertions(+)

diff --git a/arch/x86/kernel/slaunch.c b/arch/x86/kernel/slaunch.c
index 9fbe96d1ec71..d9be68325ae1 100644
--- a/arch/x86/kernel/slaunch.c
+++ b/arch/x86/kernel/slaunch.c
@@ -522,3 +522,75 @@ void __init slaunch_setup_txt(void)
 
pr_info("Intel TXT setup complete\n");
 }
+
+static inline void smx_getsec_sexit(void)
+{
+   asm volatile ("getsec\n"
+ : : "a" (SMX_X86_GETSEC_SEXIT));
+}
+
+/*
+ * Used during kexec and on reboot paths to finalize the TXT state
+ * and do an SEXIT exiting the DRTM and disabling SMX mode.
+ */
+void slaunch_finalize(int do_sexit)
+{
+   u64 one = TXT_REGVALUE_ONE, val;
+   void __iomem *config;
+
+   if (!slaunch_is_txt_launch())
+   return;
+
+   config = ioremap(TXT_PRIV_CONFIG_REGS_BASE, TXT_NR_CONFIG_PAGES *
+PAGE_SIZE);
+   if (!config) {
+   pr_emerg("Error SEXIT failed to ioremap TXT private reqs\n");
+   return;
+   }
+
+   /* Clear secrets bit for SEXIT */
+   memcpy_toio(config + TXT_CR_CMD_NO_SECRETS, &one, sizeof(one));
+   memcpy_fromio(&val, config + TXT_CR_E2STS, sizeof(val));
+
+   /* Unlock memory configurations */
+   memcpy_toio(config + TXT_CR_CMD_UNLOCK_MEM_CONFIG, &one, sizeof(one));
+   memcpy_fromio(&val, config + TXT_CR_E2STS, sizeof(val));
+
+   /* Close the TXT private register space */
+   memcpy_toio(config + TXT_CR_CMD_CLOSE_PRIVATE, &one, sizeof(one));
+   memcpy_fromio(&val, config + TXT_CR_E2STS, sizeof(val));
+
+   /*
+* Calls to iounmap are not being done because of the state of the
+* system this late in the kexec process. Local IRQs are disabled and
+* iounmap causes a TLB flush which in turn causes a warning. Leaving
+* thse mappings is not an issue since the next kernel is going to
+* completely re-setup memory management.
+*/
+
+   /* Map public registers and do a final read fence */
+   config = ioremap(TXT_PUB_CONFIG_REGS_BASE, TXT_NR_CONFIG_PAGES *
+PAGE_SIZE);
+   if (!config) {
+   pr_emerg("Error SEXIT failed to ioremap TXT public reqs\n");
+   return;
+   }
+
+   memcpy_fromio(&val, config + TXT_CR_E2STS, sizeof(val));
+
+   pr_emerg("TXT clear secrets bit and unlock memory complete.\n");
+
+   if (!do_sexit)
+   return;
+
+   if (smp_processor_id() != 0)
+   panic("Error TXT SEXIT must be called on CPU 0\n");
+
+   /* In case SMX mode was disabled, enable it for SEXIT */
+   cr4_set_bits(X86_CR4_SMXE);
+
+   /* Do the SEXIT SMX operation */
+   smx_getsec_sexit();
+
+   pr_info("TXT SEXIT complete.\n");
+}
diff --git a/kernel/kexec_core.c b/kernel/kexec_core.c
index c0caa14880c3..53d5ae8326a3 100644
--- a/kernel/kexec_core.c
+++ b/kernel/kexec_core.c
@@ -40,6 +40,7 @@
 #include 
 #include 
 #include 
+#include 
 
 #include 
 #include 
@@ -1045,6 +1046,9 @@ int kernel_kexec(void)
cpu_hotplug_enable();
pr_notice("Starting new kernel\n");
machine_shutdown();
+
+   /* Finalize TXT registers and do SEXIT */
+   slaunch_finalize(1);
}
 
kmsg_dump(KMSG_DUMP_SHUTDOWN);
-- 
2.39.3




[PATCH v10 09/20] x86: Secure Launch kernel early boot stub

2024-08-26 Thread Ross Philipson
The Secure Launch (SL) stub provides the entry point for Intel TXT (and
later AMD SKINIT) to vector to during the late launch. The symbol
sl_stub_entry is that entry point and its offset into the kernel is
conveyed to the launching code using the MLE (Measured Launch
Environment) header in the structure named mle_header. The offset of the
MLE header is set in the kernel_info. The routine sl_stub contains the
very early late launch setup code responsible for setting up the basic
environment to allow the normal kernel startup_32 code to proceed. It is
also responsible for properly waking and handling the APs on Intel
platforms. The routine sl_main which runs after entering 64b mode is
responsible for measuring configuration and module information before
it is used like the boot params, the kernel command line, the TXT heap,
an external initramfs, etc.

Signed-off-by: Ross Philipson 
---
 Documentation/arch/x86/boot.rst   |  21 +
 arch/x86/boot/compressed/Makefile |   3 +-
 arch/x86/boot/compressed/head_64.S|  29 +
 arch/x86/boot/compressed/sl_main.c| 588 +
 arch/x86/boot/compressed/sl_stub.S| 726 ++
 arch/x86/include/uapi/asm/bootparam.h |   1 +
 arch/x86/kernel/asm-offsets.c |  20 +
 7 files changed, 1387 insertions(+), 1 deletion(-)
 create mode 100644 arch/x86/boot/compressed/sl_main.c
 create mode 100644 arch/x86/boot/compressed/sl_stub.S

diff --git a/Documentation/arch/x86/boot.rst b/Documentation/arch/x86/boot.rst
index 4fd492cb4970..295cdf9bcbdb 100644
--- a/Documentation/arch/x86/boot.rst
+++ b/Documentation/arch/x86/boot.rst
@@ -482,6 +482,14 @@ Protocol:  2.00+
- If 1, KASLR enabled.
- If 0, KASLR disabled.
 
+  Bit 2 (kernel internal): SLAUNCH_FLAG
+
+   - Used internally by the setup kernel to communicate
+ Secure Launch status to kernel proper.
+
+   - If 1, Secure Launch enabled.
+   - If 0, Secure Launch disabled.
+
   Bit 5 (write): QUIET_FLAG
 
- If 0, print early messages.
@@ -1028,6 +1036,19 @@ Offset/size: 0x000c/4
 
   This field contains maximal allowed type for setup_data and setup_indirect 
structs.
 
+   =
+Field name:mle_header_offset
+Offset/size:   0x0010/4
+   =
+
+  This field contains the offset to the Secure Launch Measured Launch 
Environment
+  (MLE) header. This offset is used to locate information needed during a 
secure
+  late launch using Intel TXT. If the offset is zero, the kernel does not have
+  Secure Launch capabilities. The MLE entry point is called from TXT on the BSP
+  following a success measured launch. The specific state of the processors is
+  outlined in the TXT Software Development Guide, the latest can be found here:
+  
https://www.intel.com/content/dam/www/public/us/en/documents/guides/intel-txt-software-development-guide.pdf
+
 
 The Image Checksum
 ==
diff --git a/arch/x86/boot/compressed/Makefile 
b/arch/x86/boot/compressed/Makefile
index 40dc0b9babd5..ce651eaa68dd 100644
--- a/arch/x86/boot/compressed/Makefile
+++ b/arch/x86/boot/compressed/Makefile
@@ -107,7 +107,8 @@ vmlinux-objs-$(CONFIG_EFI) += $(obj)/efi.o
 vmlinux-objs-$(CONFIG_EFI_MIXED) += $(obj)/efi_mixed.o
 vmlinux-libs-$(CONFIG_EFI_STUB) += 
$(objtree)/drivers/firmware/efi/libstub/lib.a
 
-vmlinux-objs-$(CONFIG_SECURE_LAUNCH) += $(obj)/sha1.o $(obj)/sha256.o
+vmlinux-objs-$(CONFIG_SECURE_LAUNCH) += $(obj)/sha1.o $(obj)/sha256.o \
+   $(obj)/sl_main.o $(obj)/sl_stub.o
 
 $(obj)/vmlinux: $(vmlinux-objs-y) $(vmlinux-libs-y) FORCE
$(call if_changed,ld)
diff --git a/arch/x86/boot/compressed/head_64.S 
b/arch/x86/boot/compressed/head_64.S
index 1dcb794c5479..545329c97377 100644
--- a/arch/x86/boot/compressed/head_64.S
+++ b/arch/x86/boot/compressed/head_64.S
@@ -420,6 +420,13 @@ SYM_CODE_START(startup_64)
pushq   $0
popfq
 
+#ifdef CONFIG_SECURE_LAUNCH
+   /* Ensure the relocation region is covered by a PMR */
+   movq%rbx, %rdi
+   movl$(_bss - startup_32), %esi
+   callq   sl_check_region
+#endif
+
 /*
  * Copy the compressed kernel to the end of our buffer
  * where decompression in place becomes safe.
@@ -462,6 +469,28 @@ SYM_FUNC_START_LOCAL_NOALIGN(.Lrelocated)
shrq$3, %rcx
rep stosq
 
+#ifdef CONFIG_SECURE_LAUNCH
+   /*
+* Have to do the final early sl stub work in 64b area.
+*
+* *** NOTE ***
+*
+* Several boot params get used before we get a chance to measure
+* them in this call. This is a known issue and we currently don't
+* have a solution. The scratch field doesn't matter. There is no
+* obvious way to do anything about the use of kernel_alignment or
+* init_size though these seem low risk with all the PMR and overlap
+* checks in place.
+*/
+   movq%r15, %rdi
+  

[PATCH v10 11/20] x86: Secure Launch SMP bringup support

2024-08-26 Thread Ross Philipson
On Intel, the APs are left in a well documented state after TXT performs
the late launch. Specifically they cannot have #INIT asserted on them so
a standard startup via INIT/SIPI/SIPI cannot be performed. Instead the
early SL stub code uses MONITOR and MWAIT to park the APs. The realmode/init.c
code updates the jump address for the waiting APs with the location of the
Secure Launch entry point in the RM piggy after it is loaded and fixed up.
As the APs are woken up by writing the monitor, the APs jump to the Secure
Launch entry point in the RM piggy which mimics what the real mode code would
do then jumps to the standard RM piggy protected mode entry point.

Signed-off-by: Ross Philipson 
---
 arch/x86/include/asm/realmode.h  |  3 ++
 arch/x86/kernel/smpboot.c| 43 ++--
 arch/x86/realmode/init.c |  3 ++
 arch/x86/realmode/rm/header.S|  3 ++
 arch/x86/realmode/rm/trampoline_64.S | 32 +
 5 files changed, 82 insertions(+), 2 deletions(-)

diff --git a/arch/x86/include/asm/realmode.h b/arch/x86/include/asm/realmode.h
index 87e5482acd0d..339b48e2543d 100644
--- a/arch/x86/include/asm/realmode.h
+++ b/arch/x86/include/asm/realmode.h
@@ -38,6 +38,9 @@ struct real_mode_header {
 #ifdef CONFIG_X86_64
u32 machine_real_restart_seg;
 #endif
+#ifdef CONFIG_SECURE_LAUNCH
+   u32 sl_trampoline_start32;
+#endif
 };
 
 /* This must match data at realmode/rm/trampoline_{32,64}.S */
diff --git a/arch/x86/kernel/smpboot.c b/arch/x86/kernel/smpboot.c
index 0c35207320cb..0c915e105a9b 100644
--- a/arch/x86/kernel/smpboot.c
+++ b/arch/x86/kernel/smpboot.c
@@ -60,6 +60,7 @@
 #include 
 #include 
 #include 
+#include 
 
 #include 
 #include 
@@ -868,6 +869,41 @@ int common_cpu_up(unsigned int cpu, struct task_struct 
*idle)
return 0;
 }
 
+#ifdef CONFIG_SECURE_LAUNCH
+
+/*
+ * TXT AP startup is quite different than normal. The APs cannot have #INIT
+ * asserted on them or receive SIPIs. The early Secure Launch code has parked
+ * the APs using monitor/mwait. This will wake the APs by writing the monitor
+ * and have them jump to the protected mode code in the rmpiggy where the rest
+ * of the SMP boot of the AP will proceed normally.
+ */
+static void slaunch_wakeup_cpu_from_txt(int cpu, int apicid)
+{
+   struct sl_ap_stack_and_monitor *stack_monitor;
+   struct sl_ap_wake_info *ap_wake_info;
+
+   ap_wake_info = slaunch_get_ap_wake_info();
+
+   stack_monitor = (struct sl_ap_stack_and_monitor 
*)__va(ap_wake_info->ap_wake_block +
+  
ap_wake_info->ap_stacks_offset);
+
+   for (unsigned int i = TXT_MAX_CPUS - 1; i >= 0; i--) {
+   if (stack_monitor[i].apicid == apicid) {
+   stack_monitor[i].monitor = 1;
+   break;
+   }
+   }
+}
+
+#else
+
+static inline void slaunch_wakeup_cpu_from_txt(int cpu, int apicid)
+{
+}
+
+#endif  /* !CONFIG_SECURE_LAUNCH */
+
 /*
  * NOTE - on most systems this is a PHYSICAL apic ID, but on multiquad
  * (ie clustered apic addressing mode), this is a LOGICAL apic ID.
@@ -877,7 +913,7 @@ int common_cpu_up(unsigned int cpu, struct task_struct 
*idle)
 static int do_boot_cpu(u32 apicid, int cpu, struct task_struct *idle)
 {
unsigned long start_ip = real_mode_header->trampoline_start;
-   int ret;
+   int ret = 0;
 
 #ifdef CONFIG_X86_64
/* If 64-bit wakeup method exists, use the 64-bit mode trampoline IP */
@@ -922,12 +958,15 @@ static int do_boot_cpu(u32 apicid, int cpu, struct 
task_struct *idle)
 
/*
 * Wake up a CPU in difference cases:
+* - Intel TXT DRTM launch uses its own method to wake the APs
 * - Use a method from the APIC driver if one defined, with wakeup
 *   straight to 64-bit mode preferred over wakeup to RM.
 * Otherwise,
 * - Use an INIT boot APIC message
 */
-   if (apic->wakeup_secondary_cpu_64)
+   if (slaunch_is_txt_launch())
+   slaunch_wakeup_cpu_from_txt(cpu, apicid);
+   else if (apic->wakeup_secondary_cpu_64)
ret = apic->wakeup_secondary_cpu_64(apicid, start_ip);
else if (apic->wakeup_secondary_cpu)
ret = apic->wakeup_secondary_cpu(apicid, start_ip);
diff --git a/arch/x86/realmode/init.c b/arch/x86/realmode/init.c
index f9bc444a3064..d95776cb30d3 100644
--- a/arch/x86/realmode/init.c
+++ b/arch/x86/realmode/init.c
@@ -4,6 +4,7 @@
 #include 
 #include 
 #include 
+#include 
 
 #include 
 #include 
@@ -210,6 +211,8 @@ void __init init_real_mode(void)
 
setup_real_mode();
set_real_mode_permissions();
+
+   slaunch_fixup_jump_vector();
 }
 
 static int __init do_init_real_mode(void)
diff --git a/arch/x86/realmode/rm/header.S b/arch/x86/realmode/rm/header.S
index 2eb62be6d256..3b5cbcbbfc90 100644
--- a/arch/x86/realmode/rm/header.S

[PATCH v10 10/20] x86: Secure Launch kernel late boot stub

2024-08-26 Thread Ross Philipson
The routine slaunch_setup is called out of the x86 specific setup_arch()
routine during early kernel boot. After determining what platform is
present, various operations specific to that platform occur. This
includes finalizing setting for the platform late launch and verifying
that memory protections are in place.

Intel VT-d/IOMMU hardware provides special registers called Protected
Memory Regions (PMRs) that allow all memory to be protected from
DMA during a TXT DRTM launch. This coverage is validated during the
late setup process to ensure DMA protection is in place prior to
the IOMMUs being initialized and configured by the mainline kernel.
See the Intel Trusted Execution Technology - Measured Launch Environment
Developer's Guide for more details.

For TXT, this code also reserves the original compressed kernel setup
area where the APs were left looping so that this memory cannot be used.

Signed-off-by: Ross Philipson 
---
 arch/x86/kernel/Makefile   |   1 +
 arch/x86/kernel/setup.c|   3 +
 arch/x86/kernel/slaunch.c  | 524 +
 drivers/iommu/intel/dmar.c |   4 +
 4 files changed, 532 insertions(+)
 create mode 100644 arch/x86/kernel/slaunch.c

diff --git a/arch/x86/kernel/Makefile b/arch/x86/kernel/Makefile
index a847180836e4..a18a8239bde5 100644
--- a/arch/x86/kernel/Makefile
+++ b/arch/x86/kernel/Makefile
@@ -72,6 +72,7 @@ obj-$(CONFIG_X86_32)  += tls.o
 obj-$(CONFIG_IA32_EMULATION)   += tls.o
 obj-y  += step.o
 obj-$(CONFIG_INTEL_TXT)+= tboot.o
+obj-$(CONFIG_SECURE_LAUNCH)+= slaunch.o
 obj-$(CONFIG_ISA_DMA_API)  += i8237.o
 obj-y  += stacktrace.o
 obj-y  += cpu/
diff --git a/arch/x86/kernel/setup.c b/arch/x86/kernel/setup.c
index 6129dc2ba784..d915f21306aa 100644
--- a/arch/x86/kernel/setup.c
+++ b/arch/x86/kernel/setup.c
@@ -21,6 +21,7 @@
 #include 
 #include 
 #include 
+#include 
 #include 
 #include 
 #include 
@@ -938,6 +939,8 @@ void __init setup_arch(char **cmdline_p)
early_gart_iommu_check();
 #endif
 
+   slaunch_setup_txt();
+
/*
 * partially used pages are not usable - thus
 * we are rounding upwards:
diff --git a/arch/x86/kernel/slaunch.c b/arch/x86/kernel/slaunch.c
new file mode 100644
index ..9fbe96d1ec71
--- /dev/null
+++ b/arch/x86/kernel/slaunch.c
@@ -0,0 +1,524 @@
+// SPDX-License-Identifier: GPL-2.0
+/*
+ * Secure Launch late validation/setup and finalization support.
+ *
+ * Copyright (c) 2024, Oracle and/or its affiliates.
+ */
+
+#define pr_fmt(fmt) KBUILD_MODNAME ": " fmt
+
+#include 
+#include 
+#include 
+#include 
+#include 
+#include 
+#include 
+#include 
+#include 
+#include 
+#include 
+#include 
+#include 
+#include 
+
+static u32 sl_flags __ro_after_init;
+static struct sl_ap_wake_info ap_wake_info __ro_after_init;
+static u64 evtlog_addr __ro_after_init;
+static u32 evtlog_size __ro_after_init;
+static u64 vtd_pmr_lo_size __ro_after_init;
+
+/* This should be plenty of room */
+static u8 txt_dmar[PAGE_SIZE] __aligned(16);
+
+/*
+ * Get the Secure Launch flags that indicate what kind of launch is being done.
+ * E.g. a TXT launch is in progress or no Secure Launch is happening.
+ */
+u32 slaunch_get_flags(void)
+{
+   return sl_flags;
+}
+
+/*
+ * Return the AP wakeup information used in the SMP boot code to start up
+ * the APs that are parked using MONITOR/MWAIT.
+ */
+struct sl_ap_wake_info *slaunch_get_ap_wake_info(void)
+{
+   return &ap_wake_info;
+}
+
+/*
+ * On Intel platforms, TXT passes a safe copy of the DMAR ACPI table to the
+ * DRTM. The DRTM is supposed to use this instead of the one found in the
+ * ACPI tables.
+ */
+struct acpi_table_header *slaunch_get_dmar_table(struct acpi_table_header 
*dmar)
+{
+   /* The DMAR is only stashed and provided via TXT on Intel systems */
+   if (memcmp(txt_dmar, "DMAR", 4))
+   return dmar;
+
+   return (struct acpi_table_header *)(txt_dmar);
+}
+
+/*
+ * If running within a TXT established DRTM, this is the proper way to reset
+ * the system if a failure occurs or a security issue is found.
+ */
+void __noreturn slaunch_txt_reset(void __iomem *txt,
+ const char *msg, u64 error)
+{
+   u64 one = 1, val;
+
+   pr_err("%s", msg);
+
+   /*
+* This performs a TXT reset with a sticky error code. The reads of
+* TXT_CR_E2STS act as barriers.
+*/
+   memcpy_toio(txt + TXT_CR_ERRORCODE, &error, sizeof(error));
+   memcpy_fromio(&val, txt + TXT_CR_E2STS, sizeof(val));
+   memcpy_toio(txt + TXT_CR_CMD_NO_SECRETS, &one, sizeof(one));
+   memcpy_fromio(&val, txt + TXT_CR_E2STS, sizeof(val));
+   memcpy_toio(txt + TXT_CR_CMD_UNLOCK_MEM_CONFIG, &one, sizeof(one));
+   memcpy_fromio(&val, txt + TXT_CR_E2STS, sizeof(val));
+   memcpy

[PATCH v10 08/20] x86/boot: Place TXT MLE header in the kernel_info section

2024-08-26 Thread Ross Philipson
The MLE (measured launch environment) header must be locatable by the
boot loader and TXT must be setup to do a launch with this header's
location. While the offset to the kernel_info structure does not need
to be at a fixed offset, the offsets in the header must be relative
offsets from the start of the setup kernel. The support in the linker
file achieves this.

Signed-off-by: Ross Philipson 
Suggested-by: Ard Biesheuvel 
---
 arch/x86/boot/compressed/kernel_info.S | 50 +++---
 arch/x86/boot/compressed/vmlinux.lds.S |  7 
 2 files changed, 53 insertions(+), 4 deletions(-)

diff --git a/arch/x86/boot/compressed/kernel_info.S 
b/arch/x86/boot/compressed/kernel_info.S
index f818ee8fba38..a0604a0d1756 100644
--- a/arch/x86/boot/compressed/kernel_info.S
+++ b/arch/x86/boot/compressed/kernel_info.S
@@ -1,12 +1,20 @@
 /* SPDX-License-Identifier: GPL-2.0 */
 
+#include 
 #include 
 
-   .section ".rodata.kernel_info", "a"
+/*
+ * The kernel_info structure is not placed at a fixed offest in the
+ * kernel image. So this macro and the support in the linker file
+ * allow the relative offsets for the MLE header within the kernel
+ * image to be configured at build time.
+ */
+#define roffset(X) ((X) - kernel_info)
 
-   .global kernel_info
+   .section ".rodata.kernel_info", "a"
 
-kernel_info:
+   .balign 16
+SYM_DATA_START(kernel_info)
/* Header, Linux top (structure). */
.ascii  "LToP"
/* Size. */
@@ -17,6 +25,40 @@ kernel_info:
/* Maximal allowed type for setup_data and setup_indirect structs. */
.long   SETUP_TYPE_MAX
 
+   /* Offset to the MLE header structure */
+#if IS_ENABLED(CONFIG_SECURE_LAUNCH)
+   .long   roffset(mle_header_offset)
+#else
+   .long   0
+#endif
+
 kernel_info_var_len_data:
/* Empty for time being... */
-kernel_info_end:
+SYM_DATA_END_LABEL(kernel_info, SYM_L_LOCAL, kernel_info_end)
+
+#if IS_ENABLED(CONFIG_SECURE_LAUNCH)
+   /*
+* The MLE Header per the TXT Specification, section 2.1
+* MLE capabilities, see table 4. Capabilities set:
+* bit 0: Support for GETSEC[WAKEUP] for RLP wakeup
+* bit 1: Support for RLP wakeup using MONITOR address
+* bit 2: The ECX register will contain the pointer to the MLE page 
table
+* bit 5: TPM 1.2 family: Details/authorities PCR usage support
+* bit 9: Supported format of TPM 2.0 event log - TCG compliant
+*/
+SYM_DATA_START(mle_header)
+   .long   0x9082ac5a  /* UUID0 */
+   .long   0x74a7476f  /* UUID1 */
+   .long   0xa2555c0f  /* UUID2 */
+   .long   0x42b651cb  /* UUID3 */
+   .long   0x0034  /* MLE header size */
+   .long   0x00020002  /* MLE version 2.2 */
+   .long   roffset(sl_stub_entry_offset) /* Linear entry point of MLE 
(virt. address) */
+   .long   0x  /* First valid page of MLE */
+   .long   0x  /* Offset within binary of first byte of MLE */
+   .long   roffset(_edata_offset) /* Offset within binary of last byte + 1 
of MLE */
+   .long   0x0227  /* Bit vector of MLE-supported capabilities */
+   .long   0x  /* Starting linear address of command line (unused) 
*/
+   .long   0x  /* Ending linear address of command line (unused) */
+SYM_DATA_END(mle_header)
+#endif
diff --git a/arch/x86/boot/compressed/vmlinux.lds.S 
b/arch/x86/boot/compressed/vmlinux.lds.S
index 083ec6d7722a..f82184801462 100644
--- a/arch/x86/boot/compressed/vmlinux.lds.S
+++ b/arch/x86/boot/compressed/vmlinux.lds.S
@@ -118,3 +118,10 @@ SECTIONS
}
ASSERT(SIZEOF(.rela.dyn) == 0, "Unexpected run-time relocations (.rela) 
detected!")
 }
+
+#ifdef CONFIG_SECURE_LAUNCH
+PROVIDE(kernel_info_offset  = ABSOLUTE(kernel_info - startup_32));
+PROVIDE(mle_header_offset   = kernel_info_offset + ABSOLUTE(mle_header - 
startup_32));
+PROVIDE(sl_stub_entry_offset= kernel_info_offset + ABSOLUTE(sl_stub_entry 
- startup_32));
+PROVIDE(_edata_offset   = kernel_info_offset + ABSOLUTE(_edata - 
startup_32));
+#endif
-- 
2.39.3




[PATCH v10 01/20] Documentation/x86: Secure Launch kernel documentation

2024-08-26 Thread Ross Philipson
From: "Daniel P. Smith" 

Introduce background, overview and configuration/ABI information
for the Secure Launch kernel feature.

Signed-off-by: Daniel P. Smith 
Signed-off-by: Ross Philipson 
Reviewed-by: Bagas Sanjaya 
---
 Documentation/security/index.rst  |   1 +
 .../security/launch-integrity/index.rst   |  11 +
 .../security/launch-integrity/principles.rst  | 320 ++
 .../secure_launch_details.rst | 587 ++
 .../secure_launch_overview.rst| 227 +++
 5 files changed, 1146 insertions(+)
 create mode 100644 Documentation/security/launch-integrity/index.rst
 create mode 100644 Documentation/security/launch-integrity/principles.rst
 create mode 100644 
Documentation/security/launch-integrity/secure_launch_details.rst
 create mode 100644 
Documentation/security/launch-integrity/secure_launch_overview.rst

diff --git a/Documentation/security/index.rst b/Documentation/security/index.rst
index 59f8fc106cb0..56e31fb3d91f 100644
--- a/Documentation/security/index.rst
+++ b/Documentation/security/index.rst
@@ -19,3 +19,4 @@ Security Documentation
digsig
landlock
secrets/index
+   launch-integrity/index
diff --git a/Documentation/security/launch-integrity/index.rst 
b/Documentation/security/launch-integrity/index.rst
new file mode 100644
index ..838328186dd2
--- /dev/null
+++ b/Documentation/security/launch-integrity/index.rst
@@ -0,0 +1,11 @@
+=
+System Launch Integrity documentation
+=
+
+.. toctree::
+   :maxdepth: 1
+
+   principles
+   secure_launch_overview
+   secure_launch_details
+
diff --git a/Documentation/security/launch-integrity/principles.rst 
b/Documentation/security/launch-integrity/principles.rst
new file mode 100644
index ..d6d95099dfad
--- /dev/null
+++ b/Documentation/security/launch-integrity/principles.rst
@@ -0,0 +1,320 @@
+.. SPDX-License-Identifier: GPL-2.0
+.. Copyright © 2019-2024 Daniel P. Smith 
+
+===
+System Launch Integrity
+===
+
+:Author: Daniel P. Smith
+:Date: August 2024
+
+This document serves to establish a common understanding of what is system
+launch, the integrity concern for system launch, and why using a Root of Trust
+(RoT) from a Dynamic Launch may be desired. Throughout this document
+terminology from the Trusted Computing Group (TCG) and National Institute for
+Science and Technology (NIST) is used to ensure a vendor natural language is
+used to describe and reference security-related concepts.
+
+System Launch
+=
+
+There is a tendency to only consider the classical power-on boot as the only
+means to launch an Operating System (OS) on a computer system, but in fact most
+modern processors support two methods to launch the system. To provide clarity
+a common definition of a system launch should be established. This definition
+is that a during a single power life cycle of a system, a System Launch
+consists of an initialization event, typically in hardware, that is followed by
+an executing software payload that takes the system from the initialized state
+to a running state. Driven by the Trusted Computing Group (TCG) architecture,
+modern processors are able to support two methods to launch a system, these two
+types of system launch are known as Static Launch and Dynamic Launch.
+
+Static Launch
+-
+
+Static launch is the system launch associated with the power cycle of the CPU.
+Thus, static launch refers to the classical power-on boot where the
+initialization event is the release of the CPU from reset and the system
+firmware is the software payload that brings the system up to a running state.
+Since static launch is the system launch associated with the beginning of the
+power lifecycle of a system, it is therefore a fixed, one-time system launch.
+It is because of this that static launch is referred to and thought of as being
+"static".
+
+Dynamic Launch
+--
+
+Modern CPUs architectures provides a mechanism to re-initialize the system to a
+"known good" state without requiring a power event. This re-initialization
+event is the event for a dynamic launch and is referred to as the Dynamic
+Launch Event (DLE). The DLE functions by accepting a software payload, referred
+to as the Dynamic Configuration Environment (DCE), that execution is handed to
+after the DLE is invoked. The DCE is responsible for bringing the system back
+to a running state. Since the dynamic launch is not tied to a power event like
+the static launch, this enables a dynamic launch to be initiated at any time
+and multiple times during a single power life cycle. This dynamism is the
+reasoning behind referring to this system launch as being dynamic.
+
+Because a dynamic launch can be conducted at any time during a single power
+life cycle, they are classified into one of two types, an early launc

[PATCH v10 07/20] x86/msr: Add variable MTRR base/mask and x2apic ID registers

2024-08-26 Thread Ross Philipson
These values are needed by Secure Launch to locate particular CPUs
during AP startup and to restore the MTRR state after a TXT launch.

Signed-off-by: Ross Philipson 
---
 arch/x86/include/asm/msr-index.h | 5 +
 1 file changed, 5 insertions(+)

diff --git a/arch/x86/include/asm/msr-index.h b/arch/x86/include/asm/msr-index.h
index 82c6a4d350e0..9fbc0e554f99 100644
--- a/arch/x86/include/asm/msr-index.h
+++ b/arch/x86/include/asm/msr-index.h
@@ -348,6 +348,9 @@
 #define MSR_IA32_RTIT_OUTPUT_BASE  0x0560
 #define MSR_IA32_RTIT_OUTPUT_MASK  0x0561
 
+#define MSR_MTRRphysBase0  0x0200
+#define MSR_MTRRphysMask0  0x0201
+
 #define MSR_MTRRfix64K_0   0x0250
 #define MSR_MTRRfix16K_8   0x0258
 #define MSR_MTRRfix16K_A   0x0259
@@ -859,6 +862,8 @@
 #define MSR_IA32_APICBASE_ENABLE   (1<<11)
 #define MSR_IA32_APICBASE_BASE (0xf<<12)
 
+#define MSR_IA32_X2APIC_APICID 0x0802
+
 #define MSR_IA32_UCODE_WRITE   0x0079
 #define MSR_IA32_UCODE_REV 0x008b
 
-- 
2.39.3




[PATCH v10 06/20] x86: Add early SHA-256 support for Secure Launch early measurements

2024-08-26 Thread Ross Philipson
From: "Daniel P. Smith" 

The SHA-256 algorithm is necessary to measure configuration information into
the TPM as early as possible before using the values. This implementation
uses the established approach of #including the SHA-256 libraries directly in
the code since the compressed kernel is not uncompressed at this point.

Signed-off-by: Daniel P. Smith 
Signed-off-by: Ross Philipson 
---
 arch/x86/boot/compressed/Makefile | 2 +-
 arch/x86/boot/compressed/sha256.c | 6 ++
 2 files changed, 7 insertions(+), 1 deletion(-)
 create mode 100644 arch/x86/boot/compressed/sha256.c

diff --git a/arch/x86/boot/compressed/Makefile 
b/arch/x86/boot/compressed/Makefile
index 7eb03afb841b..40dc0b9babd5 100644
--- a/arch/x86/boot/compressed/Makefile
+++ b/arch/x86/boot/compressed/Makefile
@@ -107,7 +107,7 @@ vmlinux-objs-$(CONFIG_EFI) += $(obj)/efi.o
 vmlinux-objs-$(CONFIG_EFI_MIXED) += $(obj)/efi_mixed.o
 vmlinux-libs-$(CONFIG_EFI_STUB) += 
$(objtree)/drivers/firmware/efi/libstub/lib.a
 
-vmlinux-objs-$(CONFIG_SECURE_LAUNCH) += $(obj)/sha1.o
+vmlinux-objs-$(CONFIG_SECURE_LAUNCH) += $(obj)/sha1.o $(obj)/sha256.o
 
 $(obj)/vmlinux: $(vmlinux-objs-y) $(vmlinux-libs-y) FORCE
$(call if_changed,ld)
diff --git a/arch/x86/boot/compressed/sha256.c 
b/arch/x86/boot/compressed/sha256.c
new file mode 100644
index ..293742a90ddc
--- /dev/null
+++ b/arch/x86/boot/compressed/sha256.c
@@ -0,0 +1,6 @@
+// SPDX-License-Identifier: GPL-2.0
+/*
+ * Copyright (c) 2024 Apertus Solutions, LLC
+ */
+
+#include "../../../../lib/crypto/sha256.c"
-- 
2.39.3




[PATCH v10 05/20] x86: Add early SHA-1 support for Secure Launch early measurements

2024-08-26 Thread Ross Philipson
From: "Daniel P. Smith" 

Secure Launch is written to be compliant with the Intel TXT Measured Launch
Developer's Guide. The MLE Guide dictates that the system can be configured to
use both the SHA-1 and SHA-2 hashing algorithms.

Regardless of the preference towards SHA-2, if the firmware elected to start
with the SHA-1 and SHA-2 banks active and the dynamic launch was configured to
include SHA-1, Secure Launch is obligated to record measurements for all
algorithms requested in the launch configuration.

The user environment or the integrity management does not desire to use SHA-1,
it is free to just ignore the SHA-1 bank in any integrity operation with the
TPM. If there is a larger concern about the SHA-1 bank being active, it is free
to deliberately cap the SHA-1 PCRs, recording the event in the D-RTM log.

The SHA-1 code here has its origins in the code from the main kernel:

commit c4d5b9f ("crypto: sha1 - implement base layer for SHA-1")

A modified version of this code was introduced to the lib/crypto/sha1.c to bring
it in line with the SHA-256 code and allow it to be pulled into the setup kernel
in the same manner as SHA-256 is.

Signed-off-by: Daniel P. Smith 
Signed-off-by: Ross Philipson 
---
 arch/x86/boot/compressed/Makefile |  2 +
 arch/x86/boot/compressed/sha1.c   |  6 +++
 include/crypto/sha1.h |  1 +
 lib/crypto/sha1.c | 82 +++
 4 files changed, 91 insertions(+)
 create mode 100644 arch/x86/boot/compressed/sha1.c

diff --git a/arch/x86/boot/compressed/Makefile 
b/arch/x86/boot/compressed/Makefile
index f2051644de94..7eb03afb841b 100644
--- a/arch/x86/boot/compressed/Makefile
+++ b/arch/x86/boot/compressed/Makefile
@@ -107,6 +107,8 @@ vmlinux-objs-$(CONFIG_EFI) += $(obj)/efi.o
 vmlinux-objs-$(CONFIG_EFI_MIXED) += $(obj)/efi_mixed.o
 vmlinux-libs-$(CONFIG_EFI_STUB) += 
$(objtree)/drivers/firmware/efi/libstub/lib.a
 
+vmlinux-objs-$(CONFIG_SECURE_LAUNCH) += $(obj)/sha1.o
+
 $(obj)/vmlinux: $(vmlinux-objs-y) $(vmlinux-libs-y) FORCE
$(call if_changed,ld)
 
diff --git a/arch/x86/boot/compressed/sha1.c b/arch/x86/boot/compressed/sha1.c
new file mode 100644
index ..d754489941ac
--- /dev/null
+++ b/arch/x86/boot/compressed/sha1.c
@@ -0,0 +1,6 @@
+// SPDX-License-Identifier: GPL-2.0
+/*
+ * Copyright (c) 2024 Apertus Solutions, LLC.
+ */
+
+#include "../../../../lib/crypto/sha1.c"
diff --git a/include/crypto/sha1.h b/include/crypto/sha1.h
index 044ecea60ac8..d715dd5332e1 100644
--- a/include/crypto/sha1.h
+++ b/include/crypto/sha1.h
@@ -42,5 +42,6 @@ extern int crypto_sha1_finup(struct shash_desc *desc, const 
u8 *data,
 #define SHA1_WORKSPACE_WORDS   16
 void sha1_init(__u32 *buf);
 void sha1_transform(__u32 *digest, const char *data, __u32 *W);
+void sha1(const u8 *data, unsigned int len, u8 *out);
 
 #endif /* _CRYPTO_SHA1_H */
diff --git a/lib/crypto/sha1.c b/lib/crypto/sha1.c
index 6d2922747cab..edec3f43581f 100644
--- a/lib/crypto/sha1.c
+++ b/lib/crypto/sha1.c
@@ -137,5 +137,87 @@ void sha1_init(__u32 *buf)
 }
 EXPORT_SYMBOL(sha1_init);
 
+static void __sha1_transform(u32 *digest, const char *data)
+{
+   u32 ws[SHA1_WORKSPACE_WORDS];
+
+   sha1_transform(digest, data, ws);
+
+   /* Ensure local data for generating digest is cleared in all cases */
+   memzero_explicit(ws, sizeof(ws));
+}
+
+static void sha1_update(struct sha1_state *sctx, const u8 *data, unsigned int 
len)
+{
+   unsigned int partial = sctx->count % SHA1_BLOCK_SIZE;
+   int blocks;
+
+   sctx->count += len;
+
+   if (unlikely((partial + len) < SHA1_BLOCK_SIZE))
+   goto out;
+
+
+   if (partial) {
+   int p = SHA1_BLOCK_SIZE - partial;
+
+   memcpy(sctx->buffer + partial, data, p);
+   data += p;
+   len -= p;
+
+   __sha1_transform(sctx->state, sctx->buffer);
+   }
+
+   blocks = len / SHA1_BLOCK_SIZE;
+   len %= SHA1_BLOCK_SIZE;
+
+   if (blocks) {
+   while (blocks--) {
+   __sha1_transform(sctx->state, data);
+   data += SHA1_BLOCK_SIZE;
+   }
+   }
+   partial = 0;
+
+out:
+   memcpy(sctx->buffer + partial, data, len);
+}
+
+static void sha1_final(struct sha1_state *sctx, u8 *out)
+{
+   const int bit_offset = SHA1_BLOCK_SIZE - sizeof(__be64);
+   unsigned int partial = sctx->count % SHA1_BLOCK_SIZE;
+   __be64 *bits = (__be64 *)(sctx->buffer + bit_offset);
+   __be32 *digest = (__be32 *)out;
+   int i;
+
+   sctx->buffer[partial++] = 0x80;
+   if (partial > bit_offset) {
+   memset(sctx->buffer + partial, 0x0, SHA1_BLOCK_SIZE - partial);
+   partial = 0;
+
+   __sha1_transform(sctx->state, sctx->buffer);
+   }
+
+   memset(sctx->buffer + partial, 0x0, bit_offset - partial);
+   *bits 

[PATCH v10 03/20] x86: Secure Launch Resource Table header file

2024-08-26 Thread Ross Philipson
Introduce the Secure Launch Resource Table which forms the formal
interface between the pre and post launch code.

Signed-off-by: Ross Philipson 
---
 include/linux/slr_table.h | 276 ++
 1 file changed, 276 insertions(+)
 create mode 100644 include/linux/slr_table.h

diff --git a/include/linux/slr_table.h b/include/linux/slr_table.h
new file mode 100644
index ..a44fd6fbce23
--- /dev/null
+++ b/include/linux/slr_table.h
@@ -0,0 +1,276 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+/*
+ * TrenchBoot Secure Launch Resource Table
+ *
+ * The Secure Launch Resource Table is TrenchBoot project defined
+ * specfication to provide cross-architecture compatibility. See
+ * TrenchBoot Secure Launch kernel documentation for details.
+ *
+ * Copyright (c) 2024 Apertus Solutions, LLC
+ * Copyright (c) 2024, Oracle and/or its affiliates.
+ */
+
+#ifndef _LINUX_SLR_TABLE_H
+#define _LINUX_SLR_TABLE_H
+
+/* Put this in efi.h if it becomes a standard */
+#define SLR_TABLE_GUID EFI_GUID(0x877a9b2a, 0x0385, 
0x45d1, 0xa0, 0x34, 0x9d, 0xac, 0x9c, 0x9e, 0x56, 0x5f)
+
+/* SLR table header values */
+#define SLR_TABLE_MAGIC0x4452544d
+#define SLR_TABLE_REVISION 1
+
+/* Current revisions for the policy and UEFI config */
+#define SLR_POLICY_REVISION1
+#define SLR_UEFI_CONFIG_REVISION   1
+
+/* SLR defined architectures */
+#define SLR_INTEL_TXT  1
+#define SLR_AMD_SKINIT 2
+
+/* SLR defined bootloaders */
+#define SLR_BOOTLOADER_INVALID 0
+#define SLR_BOOTLOADER_GRUB1
+
+/* Log formats */
+#define SLR_DRTM_TPM12_LOG 1
+#define SLR_DRTM_TPM20_LOG 2
+
+/* DRTM Policy Entry Flags */
+#define SLR_POLICY_FLAG_MEASURED   0x1
+#define SLR_POLICY_IMPLICIT_SIZE   0x2
+
+/* Array Lengths */
+#define TPM_EVENT_INFO_LENGTH  32
+#define TXT_VARIABLE_MTRRS_LENGTH  32
+
+/* Tags */
+#define SLR_ENTRY_INVALID  0x
+#define SLR_ENTRY_DL_INFO  0x0001
+#define SLR_ENTRY_LOG_INFO 0x0002
+#define SLR_ENTRY_ENTRY_POLICY 0x0003
+#define SLR_ENTRY_INTEL_INFO   0x0004
+#define SLR_ENTRY_AMD_INFO 0x0005
+#define SLR_ENTRY_ARM_INFO 0x0006
+#define SLR_ENTRY_UEFI_INFO0x0007
+#define SLR_ENTRY_UEFI_CONFIG  0x0008
+#define SLR_ENTRY_END  0x
+
+/* Entity Types */
+#define SLR_ET_UNSPECIFIED 0x
+#define SLR_ET_SLRT0x0001
+#define SLR_ET_BOOT_PARAMS 0x0002
+#define SLR_ET_SETUP_DATA  0x0003
+#define SLR_ET_CMDLINE 0x0004
+#define SLR_ET_UEFI_MEMMAP 0x0005
+#define SLR_ET_RAMDISK 0x0006
+#define SLR_ET_TXT_OS2MLE  0x0010
+#define SLR_ET_UNUSED  0x
+
+#ifndef __ASSEMBLY__
+
+/*
+ * Primary Secure Launch Resource Table Header
+ */
+struct slr_table {
+   u32 magic;
+   u16 revision;
+   u16 architecture;
+   u32 size;
+   u32 max_size;
+   /* table entries */
+} __packed;
+
+/*
+ * Common SLRT Table Header
+ */
+struct slr_entry_hdr {
+   u32 tag;
+   u32 size;
+} __packed;
+
+/*
+ * Boot loader context
+ */
+struct slr_bl_context {
+   u16 bootloader;
+   u16 reserved[3];
+   u64 context;
+} __packed;
+
+/*
+ * Dynamic Launch Callback Function type
+ */
+typedef void (*dl_handler_func)(struct slr_bl_context *bl_context);
+
+/*
+ * DRTM Dynamic Launch Configuration
+ */
+struct slr_entry_dl_info {
+   struct slr_entry_hdr hdr;
+   u64 dce_size;
+   u64 dce_base;
+   u64 dlme_size;
+   u64 dlme_base;
+   u64 dlme_entry;
+   struct slr_bl_context bl_context;
+   u64 dl_handler;
+} __packed;
+
+/*
+ * TPM Log Information
+ */
+struct slr_entry_log_info {
+   struct slr_entry_hdr hdr;
+   u16 format;
+   u16 reserved;
+   u32 size;
+   u64 addr;
+} __packed;
+
+/*
+ * DRTM Measurement Entry
+ */
+struct slr_policy_entry {
+   u16 pcr;
+   u16 entity_type;
+   u16 flags;
+   u16 reserved;
+   u64 size;
+   u64 entity;
+   char evt_info[TPM_EVENT_INFO_LENGTH];
+} __packed;
+
+/*
+ * DRTM Measurement Policy
+ */
+struct slr_entry_policy {
+   struct slr_entry_hdr hdr;
+   u16 reserved[2];
+   u16 revision;
+   u16 nr_entries;
+   struct slr_policy_entry policy_entries[];
+} __packed;
+
+/*
+ * Secure Launch defined MTRR saving structures
+ */
+struct slr_txt_mtrr_pair {
+   u64 mtrr_physbase;
+   u64 mtrr_physmask;
+} __packed;
+
+struct slr_txt_mtrr_state {
+   u64 default_mem_type;
+   u64 mtrr_vcnt;
+   struct slr_txt_mtrr_pair mtrr_pair[TXT_VARIABLE_MTRRS_LENGTH];
+} __packed;
+
+/*
+ * Intel TXT Info table
+ */
+struct slr_entry_intel_info {
+   struct slr_entry_hdr hdr;
+   u64 txt_heap;
+   u64 saved_misc_enable_msr;
+   struct slr_txt_mtrr_state saved_bsp_mtrrs;
+} __packed;
+
+/*
+ * UEFI config measurement entry
+ */
+struct slr_uefi_cfg_entry {
+   u16 pcr;
+   u16 reserved;
+   u32 size;
+   u64 cfg; /* address or

[PATCH v10 02/20] x86: Secure Launch Kconfig

2024-08-26 Thread Ross Philipson
Initial bits to bring in Secure Launch functionality. Add Kconfig
options for compiling in/out the Secure Launch code.

Signed-off-by: Ross Philipson 
---
 arch/x86/Kconfig | 11 +++
 1 file changed, 11 insertions(+)

diff --git a/arch/x86/Kconfig b/arch/x86/Kconfig
index 007bab9f2a0e..24df5f468fdc 100644
--- a/arch/x86/Kconfig
+++ b/arch/x86/Kconfig
@@ -2056,6 +2056,17 @@ config EFI_RUNTIME_MAP
 
  See also Documentation/ABI/testing/sysfs-firmware-efi-runtime-map.
 
+config SECURE_LAUNCH
+   bool "Secure Launch support"
+   depends on X86_64 && X86_X2APIC && TCG_TPM && CRYPTO_LIB_SHA1 && 
CRYPTO_LIB_SHA256
+   help
+  The Secure Launch feature allows a kernel to be loaded
+  directly through an Intel TXT measured launch. Intel TXT
+  establishes a Dynamic Root of Trust for Measurement (DRTM)
+  where the CPU measures the kernel image. This feature then
+  continues the measurement chain over kernel configuration
+  information and init images.
+
 source "kernel/Kconfig.hz"
 
 config ARCH_SUPPORTS_KEXEC
-- 
2.39.3




Re: [PATCH v9 09/19] x86: Secure Launch kernel late boot stub

2024-08-12 Thread ross . philipson

On 6/4/24 3:59 PM, Jarkko Sakkinen wrote:

On Fri May 31, 2024 at 4:03 AM EEST, Ross Philipson wrote:

The routine slaunch_setup is called out of the x86 specific setup_arch()
routine during early kernel boot. After determining what platform is
present, various operations specific to that platform occur. This
includes finalizing setting for the platform late launch and verifying
that memory protections are in place.

For TXT, this code also reserves the original compressed kernel setup
area where the APs were left looping so that this memory cannot be used.

Signed-off-by: Ross Philipson 
---
  arch/x86/kernel/Makefile   |   1 +
  arch/x86/kernel/setup.c|   3 +
  arch/x86/kernel/slaunch.c  | 525 +
  drivers/iommu/intel/dmar.c |   4 +
  4 files changed, 533 insertions(+)
  create mode 100644 arch/x86/kernel/slaunch.c

diff --git a/arch/x86/kernel/Makefile b/arch/x86/kernel/Makefile
index 5d128167e2e2..b35ca99ab0a0 100644
--- a/arch/x86/kernel/Makefile
+++ b/arch/x86/kernel/Makefile
@@ -76,6 +76,7 @@ obj-$(CONFIG_X86_32)  += tls.o
  obj-$(CONFIG_IA32_EMULATION)  += tls.o
  obj-y += step.o
  obj-$(CONFIG_INTEL_TXT)   += tboot.o
+obj-$(CONFIG_SECURE_LAUNCH)+= slaunch.o


Hmm... should that be CONFIG_X86_SECURE_LAUNCH?


Further thoughts on this after discussions...

The Secure Launch feature will cover other architectures beyond x86 in 
the future. We may have to rework/move the config settings at that point 
but for now I don't think we want to change it.


Thanks
Ross



Just asking...

BR, Jarkko






Re: [PATCH v9 04/19] x86: Secure Launch Resource Table header file

2024-06-20 Thread ross . philipson

On 6/19/24 5:18 PM, Jarkko Sakkinen wrote:

On Thu Jun 6, 2024 at 7:49 PM EEST,  wrote:

For any architectures dig a similar fact:

1. Is not dead.
2. Will be there also in future.

Make any architecture existentially relevant for and not too much
coloring in the text that is easy to check.

It is nearing 5k lines so you should be really good with measured
facts too (not just launch) :-)


... but overall I get your meaning. We will spend time on this sort of
documentation for the v10 release.


Yeah, I mean we live in the universe of 3 letter acronyms so
it is better to summarize the existential part, especially
in a ~5 KSLOC patch set ;-)


Indeed, thanks.

Ross



BR, Jarkko






Re: [PATCH v9 04/19] x86: Secure Launch Resource Table header file

2024-06-06 Thread ross . philipson

On 6/5/24 11:02 PM, Jarkko Sakkinen wrote:

On Wed Jun 5, 2024 at 10:03 PM EEST,  wrote:

So I did not mean to imply that DRTM support on various
platforms/architectures has a short expiration date. In fact we are
actively working on DRTM support through the TrenchBoot project on
several platforms/architectures. Just a quick rundown here:

Intel: Plenty of Intel platforms are vPro with TXT. It is really just
the lower end systems that don't have it available (like Core i3). And
my guess was wrong about x86s. You can find the spec on the page in the
following link. There is an entire subsection on SMX support on x86s and
the changes to the various GETSEC instruction leaves that were made to
make it work there (see 3.15).

https://urldefense.com/v3/__https://www.intel.com/content/www/us/en/developer/articles/technical/envisioning-future-simplified-architecture.html__;!!ACWV5N9M2RV99hQ!Lt-srkRLHstA9PPCB-NWogvHP-9mfh2bHjkml-lARY79BhYlWJjhrHb6RyCN_WdGstcABq1FdqPUKn5dCdw$


Happend to bump into same PDF specification and exactly the seeked
information is "3.15 SMX Changes". So just write this down to some
patch that starts adding SMX things.

Link: 
https://urldefense.com/v3/__https://cdrdv2.intel.com/v1/dl/getContent/776648__;!!ACWV5N9M2RV99hQ!Lt-srkRLHstA9PPCB-NWogvHP-9mfh2bHjkml-lARY79BhYlWJjhrHb6RyCN_WdGstcABq1FdqPUuZy8Sfk$

So link and document, and other stuff above is not relevant from
upstream context, only potential maintenance burden :-)


I am not 100% sure what you mean exactly here...



For any architectures dig a similar fact:

1. Is not dead.
2. Will be there also in future.

Make any architecture existentially relevant for and not too much
coloring in the text that is easy to check.

It is nearing 5k lines so you should be really good with measured
facts too (not just launch) :-)


... but overall I get your meaning. We will spend time on this sort of 
documentation for the v10 release.


Thanks for the feedback,
Ross



BR, Jarkko






Re: [PATCH v9 04/19] x86: Secure Launch Resource Table header file

2024-06-05 Thread ross . philipson

On 6/4/24 9:04 PM, Jarkko Sakkinen wrote:

On Wed Jun 5, 2024 at 5:33 AM EEST,  wrote:

On 6/4/24 5:22 PM, Jarkko Sakkinen wrote:

On Wed Jun 5, 2024 at 2:00 AM EEST,  wrote:

On 6/4/24 3:36 PM, Jarkko Sakkinen wrote:

On Tue Jun 4, 2024 at 11:31 PM EEST,  wrote:

On 6/4/24 11:21 AM, Jarkko Sakkinen wrote:

On Fri May 31, 2024 at 4:03 AM EEST, Ross Philipson wrote:

Introduce the Secure Launch Resource Table which forms the formal
interface between the pre and post launch code.

Signed-off-by: Ross Philipson 


If a uarch specific, I'd appreciate Intel SDM reference here so that I
can look it up and compare. Like in section granularity.


This table is meant to not be architecture specific though it can
contain architecture specific sub-entities. E.g. there is a TXT specific
table and in the future there will be an AMD and ARM one (and hopefully
some others). I hope that addresses what you are pointing out or maybe I
don't fully understand what you mean here...


At least Intel SDM has a definition of any possible architecture
specific data structure. It is handy to also have this available
in inline comment for any possible such structure pointing out the
section where it is defined.


The TXT specific structure is not defined in the SDM or the TXT dev
guide. Part of it is driven by requirements in the TXT dev guide but
that guide does not contain implementation details.

That said, if you would like links to relevant documents in the comments
before arch specific structures, I can add them.


Vol. 2D 7-40, in the description of GETSEC[WAKEUP] there is in fact a
description of MLE JOINT structure at least:

1. GDT limit (offset 0)
2. GDT base (offset 4)
3. Segment selector initializer (offset 8)
4. EIP (offset 12)

So is this only exercised in protect mode, and not in long mode? Just
wondering whether I should make a bug report on this for SDM or not.


I believe you can issue the SENTER instruction in long mode, compat mode
or protected mode. On the other side thought, you will pop out of the
TXT initialization in protected mode. The SDM outlines what registers
will hold what values and what is valid and not valid. The APs will also
vector through the join structure mentioned above to the location
specified in protected mode using the GDT information you provide.



Especially this puzzles me, given that x86s won't have protected
mode in the first place...


My guess is the simplified x86 architecture will not support TXT. It is
not supported on a number of CPUs/chipsets as it stands today. Just a
guess but we know only vPro systems support TXT today.


I'm wondering could this bootstrap itself inside TDX or SNP, and that
way provide path forward? AFAIK, TDX can be nested straight of the bat
and SNP from 2nd generation EPYC's, which contain the feature.

I do buy the idea of attesting the host, not just the guests, even in
the "confidential world". That said, I'm not sure does it make sense
to add all this infrastructure for a technology with such a short
expiration date?

I would not want to say this at v9, and it is not really your fault
either, but for me this would make a lot more sense if the core of
Trenchboot was redesigned around these newer technologies with a
long-term future.


So I did not mean to imply that DRTM support on various 
platforms/architectures has a short expiration date. In fact we are 
actively working on DRTM support through the TrenchBoot project on 
several platforms/architectures. Just a quick rundown here:


Intel: Plenty of Intel platforms are vPro with TXT. It is really just 
the lower end systems that don't have it available (like Core i3). And 
my guess was wrong about x86s. You can find the spec on the page in the 
following link. There is an entire subsection on SMX support on x86s and 
the changes to the various GETSEC instruction leaves that were made to 
make it work there (see 3.15).


https://www.intel.com/content/www/us/en/developer/articles/technical/envisioning-future-simplified-architecture.html

AMD: We are actively working on SKINIT DRTM support that will go into 
TrenchBoot. There are changes coming soon to AMD SKINIT to make it more 
robust and address some earlier issues. We hope to be able to start 
sending AMD DRTM support up in the posts to LKML in the not too distant 
future.


Arm: They have recently released their DRTM specification and at least 
one Arm vendor is close to releasing firmware that will support DRTM. 
Again we are actively working in this area on the TrenchBoot project.


https://developer.arm.com/documentation/den0113/latest/

One final thought I had. The technologies you mentioned above seem to be 
to be complementary to DRTM as opposed to being a replacement for it, at 
least to me but I am not an expert on them.


Perhaps Daniel Smith would like to expand on what I have said here.

Thanks
Ross




The idea itself is great!

BR, Jarkko






Re: [PATCH v9 04/19] x86: Secure Launch Resource Table header file

2024-06-04 Thread ross . philipson

On 6/4/24 5:22 PM, Jarkko Sakkinen wrote:

On Wed Jun 5, 2024 at 2:00 AM EEST,  wrote:

On 6/4/24 3:36 PM, Jarkko Sakkinen wrote:

On Tue Jun 4, 2024 at 11:31 PM EEST,  wrote:

On 6/4/24 11:21 AM, Jarkko Sakkinen wrote:

On Fri May 31, 2024 at 4:03 AM EEST, Ross Philipson wrote:

Introduce the Secure Launch Resource Table which forms the formal
interface between the pre and post launch code.

Signed-off-by: Ross Philipson 


If a uarch specific, I'd appreciate Intel SDM reference here so that I
can look it up and compare. Like in section granularity.


This table is meant to not be architecture specific though it can
contain architecture specific sub-entities. E.g. there is a TXT specific
table and in the future there will be an AMD and ARM one (and hopefully
some others). I hope that addresses what you are pointing out or maybe I
don't fully understand what you mean here...


At least Intel SDM has a definition of any possible architecture
specific data structure. It is handy to also have this available
in inline comment for any possible such structure pointing out the
section where it is defined.


The TXT specific structure is not defined in the SDM or the TXT dev
guide. Part of it is driven by requirements in the TXT dev guide but
that guide does not contain implementation details.

That said, if you would like links to relevant documents in the comments
before arch specific structures, I can add them.


Vol. 2D 7-40, in the description of GETSEC[WAKEUP] there is in fact a
description of MLE JOINT structure at least:

1. GDT limit (offset 0)
2. GDT base (offset 4)
3. Segment selector initializer (offset 8)
4. EIP (offset 12)

So is this only exercised in protect mode, and not in long mode? Just
wondering whether I should make a bug report on this for SDM or not.


I believe you can issue the SENTER instruction in long mode, compat mode 
or protected mode. On the other side thought, you will pop out of the 
TXT initialization in protected mode. The SDM outlines what registers 
will hold what values and what is valid and not valid. The APs will also 
vector through the join structure mentioned above to the location 
specified in protected mode using the GDT information you provide.




Especially this puzzles me, given that x86s won't have protected
mode in the first place...


My guess is the simplified x86 architecture will not support TXT. It is 
not supported on a number of CPUs/chipsets as it stands today. Just a 
guess but we know only vPro systems support TXT today.


Thanks
Ross



BR, Jarkko






Re: [PATCH v9 16/19] tpm: Add ability to set the preferred locality the TPM chip uses

2024-06-04 Thread ross . philipson

On 6/4/24 3:50 PM, Jarkko Sakkinen wrote:

On Wed Jun 5, 2024 at 1:14 AM EEST,  wrote:

On 6/4/24 1:27 PM, Jarkko Sakkinen wrote:

On Fri May 31, 2024 at 4:03 AM EEST, Ross Philipson wrote:

Curently the locality is hard coded to 0 but for DRTM support, access
is needed to localities 1 through 4.

Signed-off-by: Ross Philipson 
---
   drivers/char/tpm/tpm-chip.c  | 24 +++-
   drivers/char/tpm/tpm-interface.c | 15 +++
   drivers/char/tpm/tpm.h   |  1 +
   include/linux/tpm.h  |  4 
   4 files changed, 43 insertions(+), 1 deletion(-)

diff --git a/drivers/char/tpm/tpm-chip.c b/drivers/char/tpm/tpm-chip.c
index 854546000c92..73eac54d61fb 100644
--- a/drivers/char/tpm/tpm-chip.c
+++ b/drivers/char/tpm/tpm-chip.c
@@ -44,7 +44,7 @@ static int tpm_request_locality(struct tpm_chip *chip)
if (!chip->ops->request_locality)
return 0;
   
-	rc = chip->ops->request_locality(chip, 0);

+   rc = chip->ops->request_locality(chip, chip->pref_locality);
if (rc < 0)
return rc;
   
@@ -143,6 +143,27 @@ void tpm_chip_stop(struct tpm_chip *chip)

   }
   EXPORT_SYMBOL_GPL(tpm_chip_stop);
   
+/**

+ * tpm_chip_preferred_locality() - set the TPM chip preferred locality to open
+ * @chip:  a TPM chip to use
+ * @locality:   the preferred locality
+ *
+ * Return:
+ * * true  - Preferred locality set
+ * * false - Invalid locality specified
+ */
+bool tpm_chip_preferred_locality(struct tpm_chip *chip, int locality)
+{
+   if (locality < 0 || locality >=TPM_MAX_LOCALITY)
+   return false;
+
+   mutex_lock(&chip->tpm_mutex);
+   chip->pref_locality = locality;
+   mutex_unlock(&chip->tpm_mutex);
+   return true;
+}
+EXPORT_SYMBOL_GPL(tpm_chip_preferred_locality);
+
   /**
* tpm_try_get_ops() - Get a ref to the tpm_chip
* @chip: Chip to ref
@@ -374,6 +395,7 @@ struct tpm_chip *tpm_chip_alloc(struct device *pdev,
}
   
   	chip->locality = -1;

+   chip->pref_locality = 0;
return chip;
   
   out:

diff --git a/drivers/char/tpm/tpm-interface.c b/drivers/char/tpm/tpm-interface.c
index 5da134f12c9a..35f14ccecf0e 100644
--- a/drivers/char/tpm/tpm-interface.c
+++ b/drivers/char/tpm/tpm-interface.c
@@ -274,6 +274,21 @@ int tpm_is_tpm2(struct tpm_chip *chip)
   }
   EXPORT_SYMBOL_GPL(tpm_is_tpm2);
   
+/**

+ * tpm_preferred_locality() - set the TPM chip preferred locality to open
+ * @chip:  a TPM chip to use
+ * @locality:   the preferred locality
+ *
+ * Return:
+ * * true  - Preferred locality set
+ * * false - Invalid locality specified
+ */
+bool tpm_preferred_locality(struct tpm_chip *chip, int locality)
+{
+   return tpm_chip_preferred_locality(chip, locality);
+}
+EXPORT_SYMBOL_GPL(tpm_preferred_locality);


   What good does this extra wrapping do?

   tpm_set_default_locality() and default_locality would make so much more
   sense in any case.


Are you mainly just talking about my naming choices here and in the
follow-on response? Can you clarify what you are requesting?


I'd prefer:

1. Name the variable as default_locality.
2. Only create a single expored to function to tpm-chip.c:
tpm_chip_set_default_locality().
3. Call this function in all call sites.

"tpm_preferred_locality" should be just removed, as tpm_chip_*
is exported anyway.


Ok got it, thanks.



BR, Jarkko






Re: [PATCH v9 04/19] x86: Secure Launch Resource Table header file

2024-06-04 Thread ross . philipson

On 6/4/24 3:36 PM, Jarkko Sakkinen wrote:

On Tue Jun 4, 2024 at 11:31 PM EEST,  wrote:

On 6/4/24 11:21 AM, Jarkko Sakkinen wrote:

On Fri May 31, 2024 at 4:03 AM EEST, Ross Philipson wrote:

Introduce the Secure Launch Resource Table which forms the formal
interface between the pre and post launch code.

Signed-off-by: Ross Philipson 


If a uarch specific, I'd appreciate Intel SDM reference here so that I
can look it up and compare. Like in section granularity.


This table is meant to not be architecture specific though it can
contain architecture specific sub-entities. E.g. there is a TXT specific
table and in the future there will be an AMD and ARM one (and hopefully
some others). I hope that addresses what you are pointing out or maybe I
don't fully understand what you mean here...


At least Intel SDM has a definition of any possible architecture
specific data structure. It is handy to also have this available
in inline comment for any possible such structure pointing out the
section where it is defined.


The TXT specific structure is not defined in the SDM or the TXT dev 
guide. Part of it is driven by requirements in the TXT dev guide but 
that guide does not contain implementation details.


That said, if you would like links to relevant documents in the comments 
before arch specific structures, I can add them.


Ross



BR, Jarkko





Re: [PATCH v9 16/19] tpm: Add ability to set the preferred locality the TPM chip uses

2024-06-04 Thread ross . philipson

On 6/4/24 1:27 PM, Jarkko Sakkinen wrote:

On Fri May 31, 2024 at 4:03 AM EEST, Ross Philipson wrote:

Curently the locality is hard coded to 0 but for DRTM support, access
is needed to localities 1 through 4.

Signed-off-by: Ross Philipson 
---
  drivers/char/tpm/tpm-chip.c  | 24 +++-
  drivers/char/tpm/tpm-interface.c | 15 +++
  drivers/char/tpm/tpm.h   |  1 +
  include/linux/tpm.h  |  4 
  4 files changed, 43 insertions(+), 1 deletion(-)

diff --git a/drivers/char/tpm/tpm-chip.c b/drivers/char/tpm/tpm-chip.c
index 854546000c92..73eac54d61fb 100644
--- a/drivers/char/tpm/tpm-chip.c
+++ b/drivers/char/tpm/tpm-chip.c
@@ -44,7 +44,7 @@ static int tpm_request_locality(struct tpm_chip *chip)
if (!chip->ops->request_locality)
return 0;
  
-	rc = chip->ops->request_locality(chip, 0);

+   rc = chip->ops->request_locality(chip, chip->pref_locality);
if (rc < 0)
return rc;
  
@@ -143,6 +143,27 @@ void tpm_chip_stop(struct tpm_chip *chip)

  }
  EXPORT_SYMBOL_GPL(tpm_chip_stop);
  
+/**

+ * tpm_chip_preferred_locality() - set the TPM chip preferred locality to open
+ * @chip:  a TPM chip to use
+ * @locality:   the preferred locality
+ *
+ * Return:
+ * * true  - Preferred locality set
+ * * false - Invalid locality specified
+ */
+bool tpm_chip_preferred_locality(struct tpm_chip *chip, int locality)
+{
+   if (locality < 0 || locality >=TPM_MAX_LOCALITY)
+   return false;
+
+   mutex_lock(&chip->tpm_mutex);
+   chip->pref_locality = locality;
+   mutex_unlock(&chip->tpm_mutex);
+   return true;
+}
+EXPORT_SYMBOL_GPL(tpm_chip_preferred_locality);
+
  /**
   * tpm_try_get_ops() - Get a ref to the tpm_chip
   * @chip: Chip to ref
@@ -374,6 +395,7 @@ struct tpm_chip *tpm_chip_alloc(struct device *pdev,
}
  
  	chip->locality = -1;

+   chip->pref_locality = 0;
return chip;
  
  out:

diff --git a/drivers/char/tpm/tpm-interface.c b/drivers/char/tpm/tpm-interface.c
index 5da134f12c9a..35f14ccecf0e 100644
--- a/drivers/char/tpm/tpm-interface.c
+++ b/drivers/char/tpm/tpm-interface.c
@@ -274,6 +274,21 @@ int tpm_is_tpm2(struct tpm_chip *chip)
  }
  EXPORT_SYMBOL_GPL(tpm_is_tpm2);
  
+/**

+ * tpm_preferred_locality() - set the TPM chip preferred locality to open
+ * @chip:  a TPM chip to use
+ * @locality:   the preferred locality
+ *
+ * Return:
+ * * true  - Preferred locality set
+ * * false - Invalid locality specified
+ */
+bool tpm_preferred_locality(struct tpm_chip *chip, int locality)
+{
+   return tpm_chip_preferred_locality(chip, locality);
+}
+EXPORT_SYMBOL_GPL(tpm_preferred_locality);


  What good does this extra wrapping do?

  tpm_set_default_locality() and default_locality would make so much more
  sense in any case.


Are you mainly just talking about my naming choices here and in the 
follow-on response? Can you clarify what you are requesting?


Thanks
Ross



  BR, Jarkko





Re: [PATCH v9 10/19] x86: Secure Launch SMP bringup support

2024-06-04 Thread ross . philipson

On 6/4/24 1:05 PM, Jarkko Sakkinen wrote:

On Fri May 31, 2024 at 4:03 AM EEST, Ross Philipson wrote:

On Intel, the APs are left in a well documented state after TXT performs
the late launch. Specifically they cannot have #INIT asserted on them so
a standard startup via INIT/SIPI/SIPI cannot be performed. Instead the
early SL stub code uses MONITOR and MWAIT to park the APs. The realmode/init.c
code updates the jump address for the waiting APs with the location of the
Secure Launch entry point in the RM piggy after it is loaded and fixed up.
As the APs are woken up by writing the monitor, the APs jump to the Secure
Launch entry point in the RM piggy which mimics what the real mode code would
do then jumps to the standard RM piggy protected mode entry point.

Signed-off-by: Ross Philipson 
---
  arch/x86/include/asm/realmode.h  |  3 ++
  arch/x86/kernel/smpboot.c| 58 +++-
  arch/x86/realmode/init.c |  3 ++
  arch/x86/realmode/rm/header.S|  3 ++
  arch/x86/realmode/rm/trampoline_64.S | 32 +++
  5 files changed, 97 insertions(+), 2 deletions(-)

diff --git a/arch/x86/include/asm/realmode.h b/arch/x86/include/asm/realmode.h
index 87e5482acd0d..339b48e2543d 100644
--- a/arch/x86/include/asm/realmode.h
+++ b/arch/x86/include/asm/realmode.h
@@ -38,6 +38,9 @@ struct real_mode_header {
  #ifdef CONFIG_X86_64
u32 machine_real_restart_seg;
  #endif
+#ifdef CONFIG_SECURE_LAUNCH
+   u32 sl_trampoline_start32;
+#endif
  };
  
  /* This must match data at realmode/rm/trampoline_{32,64}.S */

diff --git a/arch/x86/kernel/smpboot.c b/arch/x86/kernel/smpboot.c
index 0c35207320cb..adb521221d6c 100644
--- a/arch/x86/kernel/smpboot.c
+++ b/arch/x86/kernel/smpboot.c
@@ -60,6 +60,7 @@
  #include 
  #include 
  #include 
+#include 
  
  #include 

  #include 
@@ -868,6 +869,56 @@ int common_cpu_up(unsigned int cpu, struct task_struct 
*idle)
return 0;
  }
  
+#ifdef CONFIG_SECURE_LAUNCH

+
+static bool slaunch_is_txt_launch(void)
+{
+   if ((slaunch_get_flags() & (SL_FLAG_ACTIVE|SL_FLAG_ARCH_TXT)) ==
+   (SL_FLAG_ACTIVE | SL_FLAG_ARCH_TXT))
+   return true;
+
+   return false;
+}


static inline bool slaunch_is_txt_launch(void)
{
u32 mask =  SL_FLAG_ACTIVE | SL_FLAG_ARCH_TXT;

return slaunch_get_flags() & mask == mask;
}


Actually I think I can take your suggested change and move this function 
to the main header files since this check is done elsewhere. And later I 
can make others like slaunch_is_skinit_launch(). Thanks.






+
+/*
+ * TXT AP startup is quite different than normal. The APs cannot have #INIT
+ * asserted on them or receive SIPIs. The early Secure Launch code has parked
+ * the APs using monitor/mwait. This will wake the APs by writing the monitor
+ * and have them jump to the protected mode code in the rmpiggy where the rest
+ * of the SMP boot of the AP will proceed normally.
+ */
+static void slaunch_wakeup_cpu_from_txt(int cpu, int apicid)
+{
+   struct sl_ap_wake_info *ap_wake_info;
+   struct sl_ap_stack_and_monitor *stack_monitor = NULL;


struct sl_ap_stack_and_monitor *stack_monitor; /* note: no initialization */
struct sl_ap_wake_info *ap_wake_info;


Will fix.





+
+   ap_wake_info = slaunch_get_ap_wake_info();
+
+   stack_monitor = (struct sl_ap_stack_and_monitor 
*)__va(ap_wake_info->ap_wake_block +
+  
ap_wake_info->ap_stacks_offset);
+
+   for (unsigned int i = TXT_MAX_CPUS - 1; i >= 0; i--) {
+   if (stack_monitor[i].apicid == apicid) {
+   /* Write the monitor */


I'd remove this comment.


Sure.

Ross




+   stack_monitor[i].monitor = 1;
+   break;
+   }
+   }
+}
+
+#else
+
+static inline bool slaunch_is_txt_launch(void)
+{
+   return false;
+}
+
+static inline void slaunch_wakeup_cpu_from_txt(int cpu, int apicid)
+{
+}
+
+#endif  /* !CONFIG_SECURE_LAUNCH */
+
  /*
   * NOTE - on most systems this is a PHYSICAL apic ID, but on multiquad
   * (ie clustered apic addressing mode), this is a LOGICAL apic ID.
@@ -877,7 +928,7 @@ int common_cpu_up(unsigned int cpu, struct task_struct 
*idle)
  static int do_boot_cpu(u32 apicid, int cpu, struct task_struct *idle)
  {
unsigned long start_ip = real_mode_header->trampoline_start;
-   int ret;
+   int ret = 0;
  
  #ifdef CONFIG_X86_64

/* If 64-bit wakeup method exists, use the 64-bit mode trampoline IP */
@@ -922,12 +973,15 @@ static int do_boot_cpu(u32 apicid, int cpu, struct 
task_struct *idle)
  
  	/*

 * Wake up a CPU in difference cases:
+* - Intel TXT DRTM launch uses its own method to wake the APs
 * - Use a method from the APIC driver if one defined, with wakeup
 *   straight to 64-bit mode preferred over wakeup to RM.
 * Otherwise,

Re: [PATCH v9 09/19] x86: Secure Launch kernel late boot stub

2024-06-04 Thread ross . philipson

On 6/4/24 12:59 PM, Jarkko Sakkinen wrote:

On Fri May 31, 2024 at 4:03 AM EEST, Ross Philipson wrote:

The routine slaunch_setup is called out of the x86 specific setup_arch()
routine during early kernel boot. After determining what platform is
present, various operations specific to that platform occur. This
includes finalizing setting for the platform late launch and verifying
that memory protections are in place.

For TXT, this code also reserves the original compressed kernel setup
area where the APs were left looping so that this memory cannot be used.

Signed-off-by: Ross Philipson 
---
  arch/x86/kernel/Makefile   |   1 +
  arch/x86/kernel/setup.c|   3 +
  arch/x86/kernel/slaunch.c  | 525 +
  drivers/iommu/intel/dmar.c |   4 +
  4 files changed, 533 insertions(+)
  create mode 100644 arch/x86/kernel/slaunch.c

diff --git a/arch/x86/kernel/Makefile b/arch/x86/kernel/Makefile
index 5d128167e2e2..b35ca99ab0a0 100644
--- a/arch/x86/kernel/Makefile
+++ b/arch/x86/kernel/Makefile
@@ -76,6 +76,7 @@ obj-$(CONFIG_X86_32)  += tls.o
  obj-$(CONFIG_IA32_EMULATION)  += tls.o
  obj-y += step.o
  obj-$(CONFIG_INTEL_TXT)   += tboot.o
+obj-$(CONFIG_SECURE_LAUNCH)+= slaunch.o


Hmm... should that be CONFIG_X86_SECURE_LAUNCH?

Just asking...


It could be if you would like. I guess we just thought it was implied 
given its location.


Ross



BR, Jarkko






Re: [PATCH v9 09/19] x86: Secure Launch kernel late boot stub

2024-06-04 Thread ross . philipson

On 6/4/24 12:58 PM, Jarkko Sakkinen wrote:

On Fri May 31, 2024 at 4:03 AM EEST, Ross Philipson wrote:

The routine slaunch_setup is called out of the x86 specific setup_arch()
routine during early kernel boot. After determining what platform is
present, various operations specific to that platform occur. This
includes finalizing setting for the platform late launch and verifying
that memory protections are in place.


"memory protections" is not too helpful tbh.

Better to describe very briefly the VT-d usage.


We can enhance the commit message and talk about VT-d usage and what 
PMRs are and do.


Thanks



BR, Jarkko





Re: [PATCH v9 08/19] x86: Secure Launch kernel early boot stub

2024-06-04 Thread ross . philipson

On 6/4/24 1:54 PM, Ard Biesheuvel wrote:

On Tue, 4 Jun 2024 at 19:34,  wrote:


On 6/4/24 10:27 AM, Ard Biesheuvel wrote:

On Tue, 4 Jun 2024 at 19:24,  wrote:


On 5/31/24 6:33 AM, Ard Biesheuvel wrote:

On Fri, 31 May 2024 at 13:00, Ard Biesheuvel  wrote:


Hello Ross,

On Fri, 31 May 2024 at 03:32, Ross Philipson  wrote:


The Secure Launch (SL) stub provides the entry point for Intel TXT (and
later AMD SKINIT) to vector to during the late launch. The symbol
sl_stub_entry is that entry point and its offset into the kernel is
conveyed to the launching code using the MLE (Measured Launch
Environment) header in the structure named mle_header. The offset of the
MLE header is set in the kernel_info. The routine sl_stub contains the
very early late launch setup code responsible for setting up the basic
environment to allow the normal kernel startup_32 code to proceed. It is
also responsible for properly waking and handling the APs on Intel
platforms. The routine sl_main which runs after entering 64b mode is
responsible for measuring configuration and module information before
it is used like the boot params, the kernel command line, the TXT heap,
an external initramfs, etc.

Signed-off-by: Ross Philipson 
---
Documentation/arch/x86/boot.rst|  21 +
arch/x86/boot/compressed/Makefile  |   3 +-
arch/x86/boot/compressed/head_64.S |  30 +
arch/x86/boot/compressed/kernel_info.S |  34 ++
arch/x86/boot/compressed/sl_main.c | 577 
arch/x86/boot/compressed/sl_stub.S | 725 +
arch/x86/include/asm/msr-index.h   |   5 +
arch/x86/include/uapi/asm/bootparam.h  |   1 +
arch/x86/kernel/asm-offsets.c  |  20 +
9 files changed, 1415 insertions(+), 1 deletion(-)
create mode 100644 arch/x86/boot/compressed/sl_main.c
create mode 100644 arch/x86/boot/compressed/sl_stub.S

diff --git a/Documentation/arch/x86/boot.rst b/Documentation/arch/x86/boot.rst
index 4fd492cb4970..295cdf9bcbdb 100644
--- a/Documentation/arch/x86/boot.rst
+++ b/Documentation/arch/x86/boot.rst
@@ -482,6 +482,14 @@ Protocol:  2.00+
   - If 1, KASLR enabled.
   - If 0, KASLR disabled.

+  Bit 2 (kernel internal): SLAUNCH_FLAG
+
+   - Used internally by the setup kernel to communicate
+ Secure Launch status to kernel proper.
+
+   - If 1, Secure Launch enabled.
+   - If 0, Secure Launch disabled.
+
  Bit 5 (write): QUIET_FLAG

   - If 0, print early messages.
@@ -1028,6 +1036,19 @@ Offset/size: 0x000c/4

  This field contains maximal allowed type for setup_data and 
setup_indirect structs.

+   =
+Field name:mle_header_offset
+Offset/size:   0x0010/4
+   =
+
+  This field contains the offset to the Secure Launch Measured Launch 
Environment
+  (MLE) header. This offset is used to locate information needed during a 
secure
+  late launch using Intel TXT. If the offset is zero, the kernel does not have
+  Secure Launch capabilities. The MLE entry point is called from TXT on the BSP
+  following a success measured launch. The specific state of the processors is
+  outlined in the TXT Software Development Guide, the latest can be found here:
+  
https://urldefense.com/v3/__https://www.intel.com/content/dam/www/public/us/en/documents/guides/intel-txt-software-development-guide.pdf__;!!ACWV5N9M2RV99hQ!Mng0gnPhOYZ8D02t1rYwQfY6U3uWaypJyd1T2rsWz3QNHr9GhIZ9ANB_-cgPExxX0e0KmCpda-3VX8Fj$
+



Could we just repaint this field as the offset relative to the start
of kernel_info rather than relative to the start of the image? That
way, there is no need for patch #1, and given that the consumer of
this field accesses it via kernel_info, I wouldn't expect any issues
in applying this offset to obtain the actual address.



The Image Checksum
==
diff --git a/arch/x86/boot/compressed/Makefile 
b/arch/x86/boot/compressed/Makefile
index 9189a0e28686..9076a248d4b4 100644
--- a/arch/x86/boot/compressed/Makefile
+++ b/arch/x86/boot/compressed/Makefile
@@ -118,7 +118,8 @@ vmlinux-objs-$(CONFIG_EFI) += $(obj)/efi.o
vmlinux-objs-$(CONFIG_EFI_MIXED) += $(obj)/efi_mixed.o
vmlinux-objs-$(CONFIG_EFI_STUB) += 
$(objtree)/drivers/firmware/efi/libstub/lib.a

-vmlinux-objs-$(CONFIG_SECURE_LAUNCH) += $(obj)/early_sha1.o 
$(obj)/early_sha256.o
+vmlinux-objs-$(CONFIG_SECURE_LAUNCH) += $(obj)/early_sha1.o 
$(obj)/early_sha256.o \
+   $(obj)/sl_main.o $(obj)/sl_stub.o

$(obj)/vmlinux: $(vmlinux-objs-y) FORCE
   $(call if_changed,ld)
diff --git a/arch/x86/boot/compressed/head_64.S 
b/arch/x86/boot/compressed/head_64.S
index 1dcb794c5479..803c9e2e6d85 100644
--- a/arch/x86/boot/compressed/head_64.S
+++ b/arch/x86/boot/compressed/head_64.S
@@ -420,6 +420,13 @@ SYM_CODE_START(startup_64)
   pushq   $0
   popfq

+#ifdef CONFIG_SECURE_LAUNCH
+   /* Ensure the relocation region is cove

Re: [PATCH v9 08/19] x86: Secure Launch kernel early boot stub

2024-06-04 Thread ross . philipson

On 6/4/24 12:56 PM, Jarkko Sakkinen wrote:

On Fri May 31, 2024 at 4:03 AM EEST, Ross Philipson wrote:

The Secure Launch (SL) stub provides the entry point for Intel TXT (and
later AMD SKINIT) to vector to during the late launch. The symbol
sl_stub_entry is that entry point and its offset into the kernel is
conveyed to the launching code using the MLE (Measured Launch
Environment) header in the structure named mle_header. The offset of the
MLE header is set in the kernel_info. The routine sl_stub contains the
very early late launch setup code responsible for setting up the basic
environment to allow the normal kernel startup_32 code to proceed. It is
also responsible for properly waking and handling the APs on Intel
platforms. The routine sl_main which runs after entering 64b mode is
responsible for measuring configuration and module information before
it is used like the boot params, the kernel command line, the TXT heap,
an external initramfs, etc.

Signed-off-by: Ross Philipson 
---
  Documentation/arch/x86/boot.rst|  21 +
  arch/x86/boot/compressed/Makefile  |   3 +-
  arch/x86/boot/compressed/head_64.S |  30 +
  arch/x86/boot/compressed/kernel_info.S |  34 ++
  arch/x86/boot/compressed/sl_main.c | 577 
  arch/x86/boot/compressed/sl_stub.S | 725 +
  arch/x86/include/asm/msr-index.h   |   5 +
  arch/x86/include/uapi/asm/bootparam.h  |   1 +
  arch/x86/kernel/asm-offsets.c  |  20 +
  9 files changed, 1415 insertions(+), 1 deletion(-)
  create mode 100644 arch/x86/boot/compressed/sl_main.c
  create mode 100644 arch/x86/boot/compressed/sl_stub.S

diff --git a/Documentation/arch/x86/boot.rst b/Documentation/arch/x86/boot.rst
index 4fd492cb4970..295cdf9bcbdb 100644
--- a/Documentation/arch/x86/boot.rst
+++ b/Documentation/arch/x86/boot.rst
@@ -482,6 +482,14 @@ Protocol:  2.00+
- If 1, KASLR enabled.
- If 0, KASLR disabled.
  
+  Bit 2 (kernel internal): SLAUNCH_FLAG

+
+   - Used internally by the setup kernel to communicate
+ Secure Launch status to kernel proper.
+
+   - If 1, Secure Launch enabled.
+   - If 0, Secure Launch disabled.
+
Bit 5 (write): QUIET_FLAG
  
  	- If 0, print early messages.

@@ -1028,6 +1036,19 @@ Offset/size: 0x000c/4
  
This field contains maximal allowed type for setup_data and setup_indirect structs.
  
+	=

+Field name:mle_header_offset
+Offset/size:   0x0010/4
+   =
+
+  This field contains the offset to the Secure Launch Measured Launch 
Environment
+  (MLE) header. This offset is used to locate information needed during a 
secure
+  late launch using Intel TXT. If the offset is zero, the kernel does not have
+  Secure Launch capabilities. The MLE entry point is called from TXT on the BSP
+  following a success measured launch. The specific state of the processors is
+  outlined in the TXT Software Development Guide, the latest can be found here:
+  
https://urldefense.com/v3/__https://www.intel.com/content/dam/www/public/us/en/documents/guides/intel-txt-software-development-guide.pdf__;!!ACWV5N9M2RV99hQ!KPXGsFBxHXv1-jmHhyS3xHCC_3EnOUbN697TXyjlZlNw9YPQG9tQKo2s-6cn-HEv3gP_PpQqGwTYYQT3jxE$
+
  
  The Image Checksum

  ==
diff --git a/arch/x86/boot/compressed/Makefile 
b/arch/x86/boot/compressed/Makefile
index 9189a0e28686..9076a248d4b4 100644
--- a/arch/x86/boot/compressed/Makefile
+++ b/arch/x86/boot/compressed/Makefile
@@ -118,7 +118,8 @@ vmlinux-objs-$(CONFIG_EFI) += $(obj)/efi.o
  vmlinux-objs-$(CONFIG_EFI_MIXED) += $(obj)/efi_mixed.o
  vmlinux-objs-$(CONFIG_EFI_STUB) += 
$(objtree)/drivers/firmware/efi/libstub/lib.a
  
-vmlinux-objs-$(CONFIG_SECURE_LAUNCH) += $(obj)/early_sha1.o $(obj)/early_sha256.o

+vmlinux-objs-$(CONFIG_SECURE_LAUNCH) += $(obj)/early_sha1.o 
$(obj)/early_sha256.o \
+   $(obj)/sl_main.o $(obj)/sl_stub.o
  
  $(obj)/vmlinux: $(vmlinux-objs-y) FORCE

$(call if_changed,ld)
diff --git a/arch/x86/boot/compressed/head_64.S 
b/arch/x86/boot/compressed/head_64.S
index 1dcb794c5479..803c9e2e6d85 100644
--- a/arch/x86/boot/compressed/head_64.S
+++ b/arch/x86/boot/compressed/head_64.S
@@ -420,6 +420,13 @@ SYM_CODE_START(startup_64)
pushq   $0
popfq
  
+#ifdef CONFIG_SECURE_LAUNCH

+   /* Ensure the relocation region is coverd by a PMR */
+   movq%rbx, %rdi
+   movl$(_bss - startup_32), %esi
+   callq   sl_check_region
+#endif
+
  /*
   * Copy the compressed kernel to the end of our buffer
   * where decompression in place becomes safe.
@@ -462,6 +469,29 @@ SYM_FUNC_START_LOCAL_NOALIGN(.Lrelocated)
shrq$3, %rcx
rep stosq
  
+#ifdef CONFIG_SECURE_LAUNCH

+   /*
+* Have to do the final early sl stub work in 64b area.
+*
+* *** NOTE ***
+*
+* Several boot params get used before we get a chance to measure

Re: [PATCH v9 06/19] x86: Add early SHA-1 support for Secure Launch early measurements

2024-06-04 Thread ross . philipson

On 6/4/24 11:52 AM, Jarkko Sakkinen wrote:

On Fri May 31, 2024 at 4:03 AM EEST, Ross Philipson wrote:

From: "Daniel P. Smith" 

For better or worse, Secure Launch needs SHA-1 and SHA-256. The
choice of hashes used lie with the platform firmware, not with
software, and is often outside of the users control.

Even if we'd prefer to use SHA-256-only, if firmware elected to start us
with the SHA-1 and SHA-256 backs active, we still need SHA-1 to parse
the TPM event log thus far, and deliberately cap the SHA-1 PCRs in order
to safely use SHA-256 for everything else.

The SHA-1 code here has its origins in the code from the main kernel:

commit c4d5b9ffa31f ("crypto: sha1 - implement base layer for SHA-1")

A modified version of this code was introduced to the lib/crypto/sha1.c
to bring it in line with the SHA-256 code and allow it to be pulled into the
setup kernel in the same manner as SHA-256 is.

Signed-off-by: Daniel P. Smith 
Signed-off-by: Ross Philipson 
---
  arch/x86/boot/compressed/Makefile |  2 +
  arch/x86/boot/compressed/early_sha1.c | 12 
  include/crypto/sha1.h |  1 +
  lib/crypto/sha1.c | 81 +++
  4 files changed, 96 insertions(+)
  create mode 100644 arch/x86/boot/compressed/early_sha1.c

diff --git a/arch/x86/boot/compressed/Makefile 
b/arch/x86/boot/compressed/Makefile
index e9522c6893be..3307ebef4e1b 100644
--- a/arch/x86/boot/compressed/Makefile
+++ b/arch/x86/boot/compressed/Makefile
@@ -118,6 +118,8 @@ vmlinux-objs-$(CONFIG_EFI) += $(obj)/efi.o
  vmlinux-objs-$(CONFIG_EFI_MIXED) += $(obj)/efi_mixed.o
  vmlinux-objs-$(CONFIG_EFI_STUB) += 
$(objtree)/drivers/firmware/efi/libstub/lib.a
  
+vmlinux-objs-$(CONFIG_SECURE_LAUNCH) += $(obj)/early_sha1.o

+
  $(obj)/vmlinux: $(vmlinux-objs-y) FORCE
$(call if_changed,ld)
  
diff --git a/arch/x86/boot/compressed/early_sha1.c b/arch/x86/boot/compressed/early_sha1.c

new file mode 100644
index ..8a9b904a73ab
--- /dev/null
+++ b/arch/x86/boot/compressed/early_sha1.c
@@ -0,0 +1,12 @@
+// SPDX-License-Identifier: GPL-2.0
+/*
+ * Copyright (c) 2024 Apertus Solutions, LLC.
+ */
+
+#include 
+#include 
+#include 
+#include 
+#include 
+
+#include "../../../../lib/crypto/sha1.c"

}

Yep, make sense. Thinking only that should this be just sha1.c.

Comparing this to mainly drivers/firmware/efi/tpm.c, which is not
early_tpm.c where the early actually probably would make more sense
than here. Here sha1 primitive is just needed.

This is definitely a nitpick but why carry a prefix that is not
that useful, right?


I am not 100% sure what you mean here, sorry. Could you clarify about 
the prefix? Do you mean why did we choose early_*? There was precedent 
for doing that like early_serial_console.c.





diff --git a/include/crypto/sha1.h b/include/crypto/sha1.h
index 044ecea60ac8..d715dd5332e1 100644
--- a/include/crypto/sha1.h
+++ b/include/crypto/sha1.h
@@ -42,5 +42,6 @@ extern int crypto_sha1_finup(struct shash_desc *desc, const 
u8 *data,
  #define SHA1_WORKSPACE_WORDS  16
  void sha1_init(__u32 *buf);
  void sha1_transform(__u32 *digest, const char *data, __u32 *W);
+void sha1(const u8 *data, unsigned int len, u8 *out);
  
  #endif /* _CRYPTO_SHA1_H */

diff --git a/lib/crypto/sha1.c b/lib/crypto/sha1.c
index 1aebe7be9401..10152125b338 100644
--- a/lib/crypto/sha1.c
+++ b/lib/crypto/sha1.c
@@ -137,4 +137,85 @@ void sha1_init(__u32 *buf)
  }
  EXPORT_SYMBOL(sha1_init);
  
+static void __sha1_transform(u32 *digest, const char *data)

+{
+   u32 ws[SHA1_WORKSPACE_WORDS];
+
+   sha1_transform(digest, data, ws);
+
+   memzero_explicit(ws, sizeof(ws));


For the sake of future reference I'd carry always some inline comment
with any memzero_explicit() call site.


We can do that.




+}
+
+static void sha1_update(struct sha1_state *sctx, const u8 *data, unsigned int 
len)
+{
+   unsigned int partial = sctx->count % SHA1_BLOCK_SIZE;
+
+   sctx->count += len;
+
+   if (likely((partial + len) >= SHA1_BLOCK_SIZE)) {



if (unlikely((partial + len) < SHA1_BLOCK_SIZE))
goto out;

?


We could do it that way. I guess it would cut down in indenting. I defer 
to Daniel Smith on this...





+   int blocks;
+
+   if (partial) {
+   int p = SHA1_BLOCK_SIZE - partial;
+
+   memcpy(sctx->buffer + partial, data, p);
+   data += p;
+   len -= p;
+
+   __sha1_transform(sctx->state, sctx->buffer);
+   }
+
+   blocks = len / SHA1_BLOCK_SIZE;
+   len %= SHA1_BLOCK_SIZE;
+
+   if (blocks) {
+   while (blocks--) {
+   __sha1_transform(sctx->state, data);
+   data += SHA1_BLOCK_SIZE;
+   }
+   }
+

Re: [PATCH v9 05/19] x86: Secure Launch main header file

2024-06-04 Thread ross . philipson

On 6/4/24 11:24 AM, Jarkko Sakkinen wrote:

On Fri May 31, 2024 at 4:03 AM EEST, Ross Philipson wrote:

Introduce the main Secure Launch header file used in the early SL stub
and the early setup code.

Signed-off-by: Ross Philipson 


Right and anything AMD specific should also have legit references. I
actually always compare to the spec when I review, so not just
nitpicking really.

I'm actually bit confused, is this usable both Intel and AMD in the
current state? Might be just that have not had time follow this for some
time.


This header file mostly has TXT/Intel specific definitions in it right 
now but that is just because TXT is the first target architecture. I am 
working on the AMD side of things as we speak and yes, AMD specific 
definitions will go in here and later ARM specific definitions too.


If you would like to see say a comment block with links to relevant 
specifications in this header file, that can be done and they will be 
added as new support is added.


Thanks
Ross



BR, Jarkko






Re: [PATCH v9 04/19] x86: Secure Launch Resource Table header file

2024-06-04 Thread ross . philipson

On 6/4/24 11:21 AM, Jarkko Sakkinen wrote:

On Fri May 31, 2024 at 4:03 AM EEST, Ross Philipson wrote:

Introduce the Secure Launch Resource Table which forms the formal
interface between the pre and post launch code.

Signed-off-by: Ross Philipson 


If a uarch specific, I'd appreciate Intel SDM reference here so that I
can look it up and compare. Like in section granularity.


This table is meant to not be architecture specific though it can 
contain architecture specific sub-entities. E.g. there is a TXT specific 
table and in the future there will be an AMD and ARM one (and hopefully 
some others). I hope that addresses what you are pointing out or maybe I 
don't fully understand what you mean here...


Thanks
Ross



BR, Jarkko





Re: [PATCH v9 01/19] x86/boot: Place kernel_info at a fixed offset

2024-06-04 Thread ross . philipson

On 6/4/24 11:18 AM, Jarkko Sakkinen wrote:

On Fri May 31, 2024 at 4:03 AM EEST, Ross Philipson wrote:

From: Arvind Sankar 

There are use cases for storing the offset of a symbol in kernel_info.
For example, the trenchboot series [0] needs to store the offset of the
Measured Launch Environment header in kernel_info.


So either there are other use cases that you should enumerate, or just
be straight and state that this is done for Trenchboot.


The kernel_info concept came about because of the work we were doing on 
TrenchBoot but it was not done for TrenchBoot. It was a collaborative 
effort between the TrenchBoot team and H. Peter Anvin at Intel. He 
actually envisioned it being useful elsewhere. If you find the original 
commits for it (that went in stand-alone) from Daniel Kiper, there is a 
fair amount of detail what kernel_info is supposed to be and should be 
used for.




I believe latter is the case, and there is no reason to project further.
If it does not interfere kernel otherwise, it should be fine just by
that.

Also I believe that it is written as Trenchboot, without "series" ;-)
Think when writing commit message that it will some day be part of the
commit log, not a series flying in the air.

Sorry for the nitpicks but better to be punctual and that way also
transparent as possible, right?


No problem. We submit the patch sets to get feedback :)

Thanks for the feedback.





Since commit (note: commit ID from tip/master)

commit 527afc212231 ("x86/boot: Check that there are no run-time relocations")

run-time relocations are not allowed in the compressed kernel, so simply
using the symbol in kernel_info, as

.long   symbol

will cause a linker error because this is not position-independent.

With kernel_info being a separate object file and in a different section
from startup_32, there is no way to calculate the offset of a symbol
from the start of the image in a position-independent way.

To enable such use cases, put kernel_info into its own section which is


"To allow Trenchboot to access the fields of kernel_info..."

Much more understandable.


placed at a predetermined offset (KERNEL_INFO_OFFSET) via the linker
script. This will allow calculating the symbol offset in a
position-independent way, by adding the offset from the start of
kernel_info to KERNEL_INFO_OFFSET.

Ensure that kernel_info is aligned, and use the SYM_DATA.* macros
instead of bare labels. This stores the size of the kernel_info
structure in the ELF symbol table.


Aligned to which boundary and short explanation why to that boundary,
i.e. state the obvious if you bring it up anyway here.

Just seems to be progressing pretty well so taking my eye glass and
looking into nitty gritty details...


So a lot of this is up in the air if you read the responses between us 
and Ard Biesheuvel. It would be nice to get rid of the part where 
kernel_info is forced to a fixed offset in the setup kernel.


Thanks
Ross



BR, Jarkko





Re: [PATCH v9 08/19] x86: Secure Launch kernel early boot stub

2024-06-04 Thread ross . philipson

On 6/4/24 10:27 AM, Ard Biesheuvel wrote:

On Tue, 4 Jun 2024 at 19:24,  wrote:


On 5/31/24 6:33 AM, Ard Biesheuvel wrote:

On Fri, 31 May 2024 at 13:00, Ard Biesheuvel  wrote:


Hello Ross,

On Fri, 31 May 2024 at 03:32, Ross Philipson  wrote:


The Secure Launch (SL) stub provides the entry point for Intel TXT (and
later AMD SKINIT) to vector to during the late launch. The symbol
sl_stub_entry is that entry point and its offset into the kernel is
conveyed to the launching code using the MLE (Measured Launch
Environment) header in the structure named mle_header. The offset of the
MLE header is set in the kernel_info. The routine sl_stub contains the
very early late launch setup code responsible for setting up the basic
environment to allow the normal kernel startup_32 code to proceed. It is
also responsible for properly waking and handling the APs on Intel
platforms. The routine sl_main which runs after entering 64b mode is
responsible for measuring configuration and module information before
it is used like the boot params, the kernel command line, the TXT heap,
an external initramfs, etc.

Signed-off-by: Ross Philipson 
---
   Documentation/arch/x86/boot.rst|  21 +
   arch/x86/boot/compressed/Makefile  |   3 +-
   arch/x86/boot/compressed/head_64.S |  30 +
   arch/x86/boot/compressed/kernel_info.S |  34 ++
   arch/x86/boot/compressed/sl_main.c | 577 
   arch/x86/boot/compressed/sl_stub.S | 725 +
   arch/x86/include/asm/msr-index.h   |   5 +
   arch/x86/include/uapi/asm/bootparam.h  |   1 +
   arch/x86/kernel/asm-offsets.c  |  20 +
   9 files changed, 1415 insertions(+), 1 deletion(-)
   create mode 100644 arch/x86/boot/compressed/sl_main.c
   create mode 100644 arch/x86/boot/compressed/sl_stub.S

diff --git a/Documentation/arch/x86/boot.rst b/Documentation/arch/x86/boot.rst
index 4fd492cb4970..295cdf9bcbdb 100644
--- a/Documentation/arch/x86/boot.rst
+++ b/Documentation/arch/x86/boot.rst
@@ -482,6 +482,14 @@ Protocol:  2.00+
  - If 1, KASLR enabled.
  - If 0, KASLR disabled.

+  Bit 2 (kernel internal): SLAUNCH_FLAG
+
+   - Used internally by the setup kernel to communicate
+ Secure Launch status to kernel proper.
+
+   - If 1, Secure Launch enabled.
+   - If 0, Secure Launch disabled.
+
 Bit 5 (write): QUIET_FLAG

  - If 0, print early messages.
@@ -1028,6 +1036,19 @@ Offset/size: 0x000c/4

 This field contains maximal allowed type for setup_data and setup_indirect 
structs.

+   =
+Field name:mle_header_offset
+Offset/size:   0x0010/4
+   =
+
+  This field contains the offset to the Secure Launch Measured Launch 
Environment
+  (MLE) header. This offset is used to locate information needed during a 
secure
+  late launch using Intel TXT. If the offset is zero, the kernel does not have
+  Secure Launch capabilities. The MLE entry point is called from TXT on the BSP
+  following a success measured launch. The specific state of the processors is
+  outlined in the TXT Software Development Guide, the latest can be found here:
+  
https://urldefense.com/v3/__https://www.intel.com/content/dam/www/public/us/en/documents/guides/intel-txt-software-development-guide.pdf__;!!ACWV5N9M2RV99hQ!Mng0gnPhOYZ8D02t1rYwQfY6U3uWaypJyd1T2rsWz3QNHr9GhIZ9ANB_-cgPExxX0e0KmCpda-3VX8Fj$
+



Could we just repaint this field as the offset relative to the start
of kernel_info rather than relative to the start of the image? That
way, there is no need for patch #1, and given that the consumer of
this field accesses it via kernel_info, I wouldn't expect any issues
in applying this offset to obtain the actual address.



   The Image Checksum
   ==
diff --git a/arch/x86/boot/compressed/Makefile 
b/arch/x86/boot/compressed/Makefile
index 9189a0e28686..9076a248d4b4 100644
--- a/arch/x86/boot/compressed/Makefile
+++ b/arch/x86/boot/compressed/Makefile
@@ -118,7 +118,8 @@ vmlinux-objs-$(CONFIG_EFI) += $(obj)/efi.o
   vmlinux-objs-$(CONFIG_EFI_MIXED) += $(obj)/efi_mixed.o
   vmlinux-objs-$(CONFIG_EFI_STUB) += 
$(objtree)/drivers/firmware/efi/libstub/lib.a

-vmlinux-objs-$(CONFIG_SECURE_LAUNCH) += $(obj)/early_sha1.o 
$(obj)/early_sha256.o
+vmlinux-objs-$(CONFIG_SECURE_LAUNCH) += $(obj)/early_sha1.o 
$(obj)/early_sha256.o \
+   $(obj)/sl_main.o $(obj)/sl_stub.o

   $(obj)/vmlinux: $(vmlinux-objs-y) FORCE
  $(call if_changed,ld)
diff --git a/arch/x86/boot/compressed/head_64.S 
b/arch/x86/boot/compressed/head_64.S
index 1dcb794c5479..803c9e2e6d85 100644
--- a/arch/x86/boot/compressed/head_64.S
+++ b/arch/x86/boot/compressed/head_64.S
@@ -420,6 +420,13 @@ SYM_CODE_START(startup_64)
  pushq   $0
  popfq

+#ifdef CONFIG_SECURE_LAUNCH
+   /* Ensure the relocation region is coverd by a PMR */


covered


+   movq%rbx, %rdi
+   movl$(_bss - startup_32), %esi
+   

Re: [PATCH v9 08/19] x86: Secure Launch kernel early boot stub

2024-06-04 Thread ross . philipson

On 5/31/24 7:04 AM, Ard Biesheuvel wrote:

On Fri, 31 May 2024 at 15:33, Ard Biesheuvel  wrote:


On Fri, 31 May 2024 at 13:00, Ard Biesheuvel  wrote:


Hello Ross,

On Fri, 31 May 2024 at 03:32, Ross Philipson  wrote:


The Secure Launch (SL) stub provides the entry point for Intel TXT (and
later AMD SKINIT) to vector to during the late launch. The symbol
sl_stub_entry is that entry point and its offset into the kernel is
conveyed to the launching code using the MLE (Measured Launch
Environment) header in the structure named mle_header. The offset of the
MLE header is set in the kernel_info. The routine sl_stub contains the
very early late launch setup code responsible for setting up the basic
environment to allow the normal kernel startup_32 code to proceed. It is
also responsible for properly waking and handling the APs on Intel
platforms. The routine sl_main which runs after entering 64b mode is
responsible for measuring configuration and module information before
it is used like the boot params, the kernel command line, the TXT heap,
an external initramfs, etc.

Signed-off-by: Ross Philipson 
---
  Documentation/arch/x86/boot.rst|  21 +
  arch/x86/boot/compressed/Makefile  |   3 +-
  arch/x86/boot/compressed/head_64.S |  30 +
  arch/x86/boot/compressed/kernel_info.S |  34 ++
  arch/x86/boot/compressed/sl_main.c | 577 
  arch/x86/boot/compressed/sl_stub.S | 725 +
  arch/x86/include/asm/msr-index.h   |   5 +
  arch/x86/include/uapi/asm/bootparam.h  |   1 +
  arch/x86/kernel/asm-offsets.c  |  20 +
  9 files changed, 1415 insertions(+), 1 deletion(-)
  create mode 100644 arch/x86/boot/compressed/sl_main.c
  create mode 100644 arch/x86/boot/compressed/sl_stub.S

diff --git a/Documentation/arch/x86/boot.rst b/Documentation/arch/x86/boot.rst
index 4fd492cb4970..295cdf9bcbdb 100644
--- a/Documentation/arch/x86/boot.rst
+++ b/Documentation/arch/x86/boot.rst
@@ -482,6 +482,14 @@ Protocol:  2.00+
 - If 1, KASLR enabled.
 - If 0, KASLR disabled.

+  Bit 2 (kernel internal): SLAUNCH_FLAG
+
+   - Used internally by the setup kernel to communicate
+ Secure Launch status to kernel proper.
+
+   - If 1, Secure Launch enabled.
+   - If 0, Secure Launch disabled.
+
Bit 5 (write): QUIET_FLAG

 - If 0, print early messages.
@@ -1028,6 +1036,19 @@ Offset/size: 0x000c/4

This field contains maximal allowed type for setup_data and setup_indirect 
structs.

+   =
+Field name:mle_header_offset
+Offset/size:   0x0010/4
+   =
+
+  This field contains the offset to the Secure Launch Measured Launch 
Environment
+  (MLE) header. This offset is used to locate information needed during a 
secure
+  late launch using Intel TXT. If the offset is zero, the kernel does not have
+  Secure Launch capabilities. The MLE entry point is called from TXT on the BSP
+  following a success measured launch. The specific state of the processors is
+  outlined in the TXT Software Development Guide, the latest can be found here:
+  
https://urldefense.com/v3/__https://www.intel.com/content/dam/www/public/us/en/documents/guides/intel-txt-software-development-guide.pdf__;!!ACWV5N9M2RV99hQ!ItP96GzpIqxa7wGXth63mmzkWPbBgoixpG3-Gj1tlstBVkReH_hagE-Sa_E6DwcvYtu5xLOwbVWeeXGa$
+



Could we just repaint this field as the offset relative to the start
of kernel_info rather than relative to the start of the image? That
way, there is no need for patch #1, and given that the consumer of
this field accesses it via kernel_info, I wouldn't expect any issues
in applying this offset to obtain the actual address.



  The Image Checksum
  ==
diff --git a/arch/x86/boot/compressed/Makefile 
b/arch/x86/boot/compressed/Makefile
index 9189a0e28686..9076a248d4b4 100644
--- a/arch/x86/boot/compressed/Makefile
+++ b/arch/x86/boot/compressed/Makefile
@@ -118,7 +118,8 @@ vmlinux-objs-$(CONFIG_EFI) += $(obj)/efi.o
  vmlinux-objs-$(CONFIG_EFI_MIXED) += $(obj)/efi_mixed.o
  vmlinux-objs-$(CONFIG_EFI_STUB) += 
$(objtree)/drivers/firmware/efi/libstub/lib.a

-vmlinux-objs-$(CONFIG_SECURE_LAUNCH) += $(obj)/early_sha1.o 
$(obj)/early_sha256.o
+vmlinux-objs-$(CONFIG_SECURE_LAUNCH) += $(obj)/early_sha1.o 
$(obj)/early_sha256.o \
+   $(obj)/sl_main.o $(obj)/sl_stub.o

  $(obj)/vmlinux: $(vmlinux-objs-y) FORCE
 $(call if_changed,ld)
diff --git a/arch/x86/boot/compressed/head_64.S 
b/arch/x86/boot/compressed/head_64.S
index 1dcb794c5479..803c9e2e6d85 100644
--- a/arch/x86/boot/compressed/head_64.S
+++ b/arch/x86/boot/compressed/head_64.S
@@ -420,6 +420,13 @@ SYM_CODE_START(startup_64)
 pushq   $0
 popfq

+#ifdef CONFIG_SECURE_LAUNCH
+   /* Ensure the relocation region is coverd by a PMR */


covered


+   movq%rbx, %rdi
+   movl$(_bss - startup_32), %esi
+   callq   sl_check_region
+#endif
+
  /*
   * Cop

Re: [PATCH v9 08/19] x86: Secure Launch kernel early boot stub

2024-06-04 Thread ross . philipson

On 5/31/24 6:33 AM, Ard Biesheuvel wrote:

On Fri, 31 May 2024 at 13:00, Ard Biesheuvel  wrote:


Hello Ross,

On Fri, 31 May 2024 at 03:32, Ross Philipson  wrote:


The Secure Launch (SL) stub provides the entry point for Intel TXT (and
later AMD SKINIT) to vector to during the late launch. The symbol
sl_stub_entry is that entry point and its offset into the kernel is
conveyed to the launching code using the MLE (Measured Launch
Environment) header in the structure named mle_header. The offset of the
MLE header is set in the kernel_info. The routine sl_stub contains the
very early late launch setup code responsible for setting up the basic
environment to allow the normal kernel startup_32 code to proceed. It is
also responsible for properly waking and handling the APs on Intel
platforms. The routine sl_main which runs after entering 64b mode is
responsible for measuring configuration and module information before
it is used like the boot params, the kernel command line, the TXT heap,
an external initramfs, etc.

Signed-off-by: Ross Philipson 
---
  Documentation/arch/x86/boot.rst|  21 +
  arch/x86/boot/compressed/Makefile  |   3 +-
  arch/x86/boot/compressed/head_64.S |  30 +
  arch/x86/boot/compressed/kernel_info.S |  34 ++
  arch/x86/boot/compressed/sl_main.c | 577 
  arch/x86/boot/compressed/sl_stub.S | 725 +
  arch/x86/include/asm/msr-index.h   |   5 +
  arch/x86/include/uapi/asm/bootparam.h  |   1 +
  arch/x86/kernel/asm-offsets.c  |  20 +
  9 files changed, 1415 insertions(+), 1 deletion(-)
  create mode 100644 arch/x86/boot/compressed/sl_main.c
  create mode 100644 arch/x86/boot/compressed/sl_stub.S

diff --git a/Documentation/arch/x86/boot.rst b/Documentation/arch/x86/boot.rst
index 4fd492cb4970..295cdf9bcbdb 100644
--- a/Documentation/arch/x86/boot.rst
+++ b/Documentation/arch/x86/boot.rst
@@ -482,6 +482,14 @@ Protocol:  2.00+
 - If 1, KASLR enabled.
 - If 0, KASLR disabled.

+  Bit 2 (kernel internal): SLAUNCH_FLAG
+
+   - Used internally by the setup kernel to communicate
+ Secure Launch status to kernel proper.
+
+   - If 1, Secure Launch enabled.
+   - If 0, Secure Launch disabled.
+
Bit 5 (write): QUIET_FLAG

 - If 0, print early messages.
@@ -1028,6 +1036,19 @@ Offset/size: 0x000c/4

This field contains maximal allowed type for setup_data and setup_indirect 
structs.

+   =
+Field name:mle_header_offset
+Offset/size:   0x0010/4
+   =
+
+  This field contains the offset to the Secure Launch Measured Launch 
Environment
+  (MLE) header. This offset is used to locate information needed during a 
secure
+  late launch using Intel TXT. If the offset is zero, the kernel does not have
+  Secure Launch capabilities. The MLE entry point is called from TXT on the BSP
+  following a success measured launch. The specific state of the processors is
+  outlined in the TXT Software Development Guide, the latest can be found here:
+  
https://urldefense.com/v3/__https://www.intel.com/content/dam/www/public/us/en/documents/guides/intel-txt-software-development-guide.pdf__;!!ACWV5N9M2RV99hQ!Mng0gnPhOYZ8D02t1rYwQfY6U3uWaypJyd1T2rsWz3QNHr9GhIZ9ANB_-cgPExxX0e0KmCpda-3VX8Fj$
+



Could we just repaint this field as the offset relative to the start
of kernel_info rather than relative to the start of the image? That
way, there is no need for patch #1, and given that the consumer of
this field accesses it via kernel_info, I wouldn't expect any issues
in applying this offset to obtain the actual address.



  The Image Checksum
  ==
diff --git a/arch/x86/boot/compressed/Makefile 
b/arch/x86/boot/compressed/Makefile
index 9189a0e28686..9076a248d4b4 100644
--- a/arch/x86/boot/compressed/Makefile
+++ b/arch/x86/boot/compressed/Makefile
@@ -118,7 +118,8 @@ vmlinux-objs-$(CONFIG_EFI) += $(obj)/efi.o
  vmlinux-objs-$(CONFIG_EFI_MIXED) += $(obj)/efi_mixed.o
  vmlinux-objs-$(CONFIG_EFI_STUB) += 
$(objtree)/drivers/firmware/efi/libstub/lib.a

-vmlinux-objs-$(CONFIG_SECURE_LAUNCH) += $(obj)/early_sha1.o 
$(obj)/early_sha256.o
+vmlinux-objs-$(CONFIG_SECURE_LAUNCH) += $(obj)/early_sha1.o 
$(obj)/early_sha256.o \
+   $(obj)/sl_main.o $(obj)/sl_stub.o

  $(obj)/vmlinux: $(vmlinux-objs-y) FORCE
 $(call if_changed,ld)
diff --git a/arch/x86/boot/compressed/head_64.S 
b/arch/x86/boot/compressed/head_64.S
index 1dcb794c5479..803c9e2e6d85 100644
--- a/arch/x86/boot/compressed/head_64.S
+++ b/arch/x86/boot/compressed/head_64.S
@@ -420,6 +420,13 @@ SYM_CODE_START(startup_64)
 pushq   $0
 popfq

+#ifdef CONFIG_SECURE_LAUNCH
+   /* Ensure the relocation region is coverd by a PMR */


covered


+   movq%rbx, %rdi
+   movl$(_bss - startup_32), %esi
+   callq   sl_check_region
+#endif
+
  /*
   * Copy the compressed kernel to the end of our buffer
   * 

Re: [PATCH v9 19/19] x86: EFI stub DRTM launch support for Secure Launch

2024-06-04 Thread ross . philipson

On 5/31/24 4:09 AM, Ard Biesheuvel wrote:

On Fri, 31 May 2024 at 03:32, Ross Philipson  wrote:


This support allows the DRTM launch to be initiated after an EFI stub
launch of the Linux kernel is done. This is accomplished by providing
a handler to jump to when a Secure Launch is in progress. This has to be
called after the EFI stub does Exit Boot Services.

Signed-off-by: Ross Philipson 


Just some minor remarks below. The overall approach in this patch
looks fine now.



---
  drivers/firmware/efi/libstub/x86-stub.c | 98 +
  1 file changed, 98 insertions(+)

diff --git a/drivers/firmware/efi/libstub/x86-stub.c 
b/drivers/firmware/efi/libstub/x86-stub.c
index d5a8182cf2e1..a1143d006202 100644
--- a/drivers/firmware/efi/libstub/x86-stub.c
+++ b/drivers/firmware/efi/libstub/x86-stub.c
@@ -9,6 +9,8 @@
  #include 
  #include 
  #include 
+#include 
+#include 

  #include 
  #include 
@@ -830,6 +832,97 @@ static efi_status_t efi_decompress_kernel(unsigned long 
*kernel_entry)
 return efi_adjust_memory_range_protection(addr, kernel_text_size);
  }

+#if (IS_ENABLED(CONFIG_SECURE_LAUNCH))


IS_ENABLED() is mostly used for C conditionals not CPP ones.

It would be nice if this #if could be dropped, and replaced with ... (see below)



+static bool efi_secure_launch_update_boot_params(struct slr_table *slrt,
+struct boot_params 
*boot_params)
+{
+   struct slr_entry_intel_info *txt_info;
+   struct slr_entry_policy *policy;
+   struct txt_os_mle_data *os_mle;
+   bool updated = false;
+   int i;
+
+   txt_info = slr_next_entry_by_tag(slrt, NULL, SLR_ENTRY_INTEL_INFO);
+   if (!txt_info)
+   return false;
+
+   os_mle = txt_os_mle_data_start((void *)txt_info->txt_heap);
+   if (!os_mle)
+   return false;
+
+   os_mle->boot_params_addr = (u32)(u64)boot_params;
+


Why is this safe?


The size of the boot_params_addr is a holdover from the legacy boot 
world when boot params were always loaded at a low address. We will 
increase the size of the field.





+   policy = slr_next_entry_by_tag(slrt, NULL, SLR_ENTRY_ENTRY_POLICY);
+   if (!policy)
+   return false;
+
+   for (i = 0; i < policy->nr_entries; i++) {
+   if (policy->policy_entries[i].entity_type == 
SLR_ET_BOOT_PARAMS) {
+   policy->policy_entries[i].entity = (u64)boot_params;
+   updated = true;
+   break;
+   }
+   }
+
+   /*
+* If this is a PE entry into EFI stub the mocked up boot params will
+* be missing some of the setup header data needed for the second stage
+* of the Secure Launch boot.
+*/
+   if (image) {
+   struct setup_header *hdr = (struct setup_header *)((u8 
*)image->image_base + 0x1f1);


Could we use something other than a bare 0x1f1 constant here? struct
boot_params has a struct setup_header at the correct offset, so with
some casting of offsetof() use, we can make this look a lot more self
explanatory.


Yes we can do this.





+   u64 cmdline_ptr, hi_val;
+
+   boot_params->hdr.setup_sects = hdr->setup_sects;
+   boot_params->hdr.syssize = hdr->syssize;
+   boot_params->hdr.version = hdr->version;
+   boot_params->hdr.loadflags = hdr->loadflags;
+   boot_params->hdr.kernel_alignment = hdr->kernel_alignment;
+   boot_params->hdr.min_alignment = hdr->min_alignment;
+   boot_params->hdr.xloadflags = hdr->xloadflags;
+   boot_params->hdr.init_size = hdr->init_size;
+   boot_params->hdr.kernel_info_offset = hdr->kernel_info_offset;
+   hi_val = boot_params->ext_cmd_line_ptr;


We have efi_set_u64_split() for this.


Ok I will use that then.




+   cmdline_ptr = boot_params->hdr.cmd_line_ptr | hi_val << 32;
+   boot_params->hdr.cmdline_size = strlen((const char 
*)cmdline_ptr);;
+   }
+
+   return updated;
+}
+
+static void efi_secure_launch(struct boot_params *boot_params)
+{
+   struct slr_entry_dl_info *dlinfo;
+   efi_guid_t guid = SLR_TABLE_GUID;
+   dl_handler_func handler_callback;
+   struct slr_table *slrt;
+


... a C conditional here, e.g.,

if (!IS_ENABLED(CONFIG_SECURE_LAUNCH))
 return;

The difference is that all the code will get compile test coverage
every time, instead of only in configs that enable
CONFIG_SECURE_LAUNCH.

This significantly reduces the risk that your stuff will get broken
inadvertently.


Understood, I will address these as you suggest.




+   /*
+* The presence of this table indicated a Secure Launch
+* is being requested.
+*/
+   slrt = (struct slr_table *)get_efi_confi

Re: [PATCH v9 08/19] x86: Secure Launch kernel early boot stub

2024-06-04 Thread ross . philipson

On 5/31/24 4:00 AM, Ard Biesheuvel wrote:

Hello Ross,


Hi Ard,



On Fri, 31 May 2024 at 03:32, Ross Philipson  wrote:


The Secure Launch (SL) stub provides the entry point for Intel TXT (and
later AMD SKINIT) to vector to during the late launch. The symbol
sl_stub_entry is that entry point and its offset into the kernel is
conveyed to the launching code using the MLE (Measured Launch
Environment) header in the structure named mle_header. The offset of the
MLE header is set in the kernel_info. The routine sl_stub contains the
very early late launch setup code responsible for setting up the basic
environment to allow the normal kernel startup_32 code to proceed. It is
also responsible for properly waking and handling the APs on Intel
platforms. The routine sl_main which runs after entering 64b mode is
responsible for measuring configuration and module information before
it is used like the boot params, the kernel command line, the TXT heap,
an external initramfs, etc.

Signed-off-by: Ross Philipson 
---
  Documentation/arch/x86/boot.rst|  21 +
  arch/x86/boot/compressed/Makefile  |   3 +-
  arch/x86/boot/compressed/head_64.S |  30 +
  arch/x86/boot/compressed/kernel_info.S |  34 ++
  arch/x86/boot/compressed/sl_main.c | 577 
  arch/x86/boot/compressed/sl_stub.S | 725 +
  arch/x86/include/asm/msr-index.h   |   5 +
  arch/x86/include/uapi/asm/bootparam.h  |   1 +
  arch/x86/kernel/asm-offsets.c  |  20 +
  9 files changed, 1415 insertions(+), 1 deletion(-)
  create mode 100644 arch/x86/boot/compressed/sl_main.c
  create mode 100644 arch/x86/boot/compressed/sl_stub.S

diff --git a/Documentation/arch/x86/boot.rst b/Documentation/arch/x86/boot.rst
index 4fd492cb4970..295cdf9bcbdb 100644
--- a/Documentation/arch/x86/boot.rst
+++ b/Documentation/arch/x86/boot.rst
@@ -482,6 +482,14 @@ Protocol:  2.00+
 - If 1, KASLR enabled.
 - If 0, KASLR disabled.

+  Bit 2 (kernel internal): SLAUNCH_FLAG
+
+   - Used internally by the setup kernel to communicate
+ Secure Launch status to kernel proper.
+
+   - If 1, Secure Launch enabled.
+   - If 0, Secure Launch disabled.
+
Bit 5 (write): QUIET_FLAG

 - If 0, print early messages.
@@ -1028,6 +1036,19 @@ Offset/size: 0x000c/4

This field contains maximal allowed type for setup_data and setup_indirect 
structs.

+   =
+Field name:mle_header_offset
+Offset/size:   0x0010/4
+   =
+
+  This field contains the offset to the Secure Launch Measured Launch 
Environment
+  (MLE) header. This offset is used to locate information needed during a 
secure
+  late launch using Intel TXT. If the offset is zero, the kernel does not have
+  Secure Launch capabilities. The MLE entry point is called from TXT on the BSP
+  following a success measured launch. The specific state of the processors is
+  outlined in the TXT Software Development Guide, the latest can be found here:
+  
https://urldefense.com/v3/__https://www.intel.com/content/dam/www/public/us/en/documents/guides/intel-txt-software-development-guide.pdf__;!!ACWV5N9M2RV99hQ!MdqTLUxfB5YUUOB1qyhkN8TF9HAoGW7yy_2qwZGWz8eb73CYkDXY-h1SeyaZjwzsSWz408D3LgDD8Zmw$
+



Could we just repaint this field as the offset relative to the start
of kernel_info rather than relative to the start of the image? That
way, there is no need for patch #1, and given that the consumer of
this field accesses it via kernel_info, I wouldn't expect any issues
in applying this offset to obtain the actual address.


What you suggest here may be possible with respect to the location of 
the MLE header itself, we need to give that more thought. The real issue 
though is covered in my response below concerning the fields in the MLE 
header.






  The Image Checksum
  ==
diff --git a/arch/x86/boot/compressed/Makefile 
b/arch/x86/boot/compressed/Makefile
index 9189a0e28686..9076a248d4b4 100644
--- a/arch/x86/boot/compressed/Makefile
+++ b/arch/x86/boot/compressed/Makefile
@@ -118,7 +118,8 @@ vmlinux-objs-$(CONFIG_EFI) += $(obj)/efi.o
  vmlinux-objs-$(CONFIG_EFI_MIXED) += $(obj)/efi_mixed.o
  vmlinux-objs-$(CONFIG_EFI_STUB) += 
$(objtree)/drivers/firmware/efi/libstub/lib.a

-vmlinux-objs-$(CONFIG_SECURE_LAUNCH) += $(obj)/early_sha1.o 
$(obj)/early_sha256.o
+vmlinux-objs-$(CONFIG_SECURE_LAUNCH) += $(obj)/early_sha1.o 
$(obj)/early_sha256.o \
+   $(obj)/sl_main.o $(obj)/sl_stub.o

  $(obj)/vmlinux: $(vmlinux-objs-y) FORCE
 $(call if_changed,ld)
diff --git a/arch/x86/boot/compressed/head_64.S 
b/arch/x86/boot/compressed/head_64.S
index 1dcb794c5479..803c9e2e6d85 100644
--- a/arch/x86/boot/compressed/head_64.S
+++ b/arch/x86/boot/compressed/head_64.S
@@ -420,6 +420,13 @@ SYM_CODE_START(startup_64)
 pushq   $0
 popfq

+#ifdef CONFIG_SECURE_LAUNCH
+   /* Ensure the relocation region is coverd by a PMR */


covered

Re: [PATCH v9 06/19] x86: Add early SHA-1 support for Secure Launch early measurements

2024-05-31 Thread ross . philipson

On 5/30/24 7:16 PM, Eric Biggers wrote:

On Thu, May 30, 2024 at 06:03:18PM -0700, Ross Philipson wrote:

From: "Daniel P. Smith" 

For better or worse, Secure Launch needs SHA-1 and SHA-256. The
choice of hashes used lie with the platform firmware, not with
software, and is often outside of the users control.

Even if we'd prefer to use SHA-256-only, if firmware elected to start us
with the SHA-1 and SHA-256 backs active, we still need SHA-1 to parse
the TPM event log thus far, and deliberately cap the SHA-1 PCRs in order
to safely use SHA-256 for everything else.

The SHA-1 code here has its origins in the code from the main kernel:

commit c4d5b9ffa31f ("crypto: sha1 - implement base layer for SHA-1")

A modified version of this code was introduced to the lib/crypto/sha1.c
to bring it in line with the SHA-256 code and allow it to be pulled into the
setup kernel in the same manner as SHA-256 is.

Signed-off-by: Daniel P. Smith 
Signed-off-by: Ross Philipson 


Thanks.  This explanation doesn't seem to have made it into the actual code or
documentation.  Can you please get it into a more permanent location?

Also, can you point to where the "deliberately cap the SHA-1 PCRs" thing happens
in the code?

That paragraph is also phrased as a hypothetical, "Even if we'd prefer to use
SHA-256-only".  That implies that you do not, in fact, prefer SHA-256 only.  Is
that the case?  Sure, maybe there are situations where you *have* to use SHA-1,
but why would you not at least *prefer* SHA-256?


Yes those are fair points. We will address them and indicate we prefer 
SHA-256 or better.





/*
  * An implementation of SHA-1's compression function.  Don't use in new code!
  * You shouldn't be using SHA-1, and even if you *have* to use SHA-1, this 
isn't
  * the correct way to hash something with SHA-1 (use crypto_shash instead).
  */
#define SHA1_DIGEST_WORDS   (SHA1_DIGEST_SIZE / 4)
#define SHA1_WORKSPACE_WORDS16
void sha1_init(__u32 *buf);
void sha1_transform(__u32 *digest, const char *data, __u32 *W);
+void sha1(const u8 *data, unsigned int len, u8 *out);

 > Also, the comment above needs to be updated.


Ack, will address.

Thank you



- Eric





[PATCH v9 18/19] x86: Secure Launch late initcall platform module

2024-05-30 Thread Ross Philipson
From: "Daniel P. Smith" 

The Secure Launch platform module is a late init module. During the
init call, the TPM event log is read and measurements taken in the
early boot stub code are located. These measurements are extended
into the TPM PCRs using the mainline TPM kernel driver.

The platform module also registers the securityfs nodes to allow
access to TXT register fields on Intel along with the fetching of
and writing events to the late launch TPM log.

Signed-off-by: Daniel P. Smith 
Signed-off-by: garnetgrimm 
Signed-off-by: Ross Philipson 
---
 arch/x86/kernel/Makefile   |   1 +
 arch/x86/kernel/slmodule.c | 513 +
 2 files changed, 514 insertions(+)
 create mode 100644 arch/x86/kernel/slmodule.c

diff --git a/arch/x86/kernel/Makefile b/arch/x86/kernel/Makefile
index b35ca99ab0a0..f2432c4a747a 100644
--- a/arch/x86/kernel/Makefile
+++ b/arch/x86/kernel/Makefile
@@ -77,6 +77,7 @@ obj-$(CONFIG_IA32_EMULATION)  += tls.o
 obj-y  += step.o
 obj-$(CONFIG_INTEL_TXT)+= tboot.o
 obj-$(CONFIG_SECURE_LAUNCH)+= slaunch.o
+obj-$(CONFIG_SECURE_LAUNCH)+= slmodule.o
 obj-$(CONFIG_ISA_DMA_API)  += i8237.o
 obj-y  += stacktrace.o
 obj-y  += cpu/
diff --git a/arch/x86/kernel/slmodule.c b/arch/x86/kernel/slmodule.c
new file mode 100644
index ..0e1354e3a914
--- /dev/null
+++ b/arch/x86/kernel/slmodule.c
@@ -0,0 +1,513 @@
+// SPDX-License-Identifier: GPL-2.0
+/*
+ * Secure Launch late validation/setup, securityfs exposure and finalization.
+ *
+ * Copyright (c) 2024 Apertus Solutions, LLC
+ * Copyright (c) 2024 Assured Information Security, Inc.
+ * Copyright (c) 2024, Oracle and/or its affiliates.
+ *
+ * Co-developed-by: Garnet T. Grimm 
+ * Signed-off-by: Garnet T. Grimm 
+ * Signed-off-by: Daniel P. Smith 
+ */
+
+#define pr_fmt(fmt) KBUILD_MODNAME ": " fmt
+
+#include 
+#include 
+#include 
+#include 
+#include 
+#include 
+#include 
+#include 
+#include 
+#include 
+#include 
+#include 
+#include 
+#include 
+
+/*
+ * The macro DECLARE_TXT_PUB_READ_U is used to read values from the TXT
+ * public registers as unsigned values.
+ */
+#define DECLARE_TXT_PUB_READ_U(size, fmt, msg_size)\
+static ssize_t txt_pub_read_u##size(unsigned int offset,   \
+   loff_t *read_offset,\
+   size_t read_len,\
+   char __user *buf)   \
+{  \
+   char msg_buffer[msg_size];  \
+   u##size reg_value = 0;  \
+   void __iomem *txt;  \
+   \
+   txt = ioremap(TXT_PUB_CONFIG_REGS_BASE, \
+   TXT_NR_CONFIG_PAGES * PAGE_SIZE);   \
+   if (!txt)   \
+   return -EFAULT; \
+   memcpy_fromio(®_value, txt + offset, sizeof(u##size));   \
+   iounmap(txt);   \
+   snprintf(msg_buffer, msg_size, fmt, reg_value); \
+   return simple_read_from_buffer(buf, read_len, read_offset,  \
+   &msg_buffer, msg_size); \
+}
+
+DECLARE_TXT_PUB_READ_U(8, "%#04x\n", 6);
+DECLARE_TXT_PUB_READ_U(32, "%#010x\n", 12);
+DECLARE_TXT_PUB_READ_U(64, "%#018llx\n", 20);
+
+#define DECLARE_TXT_FOPS(reg_name, reg_offset, reg_size)   \
+static ssize_t txt_##reg_name##_read(struct file *flip,
\
+   char __user *buf, size_t read_len, loff_t *read_offset) \
+{  \
+   return txt_pub_read_u##reg_size(reg_offset, read_offset,\
+   read_len, buf); \
+}  \
+static const struct file_operations reg_name##_ops = { \
+   .read = txt_##reg_name##_read,  \
+}
+
+DECLARE_TXT_FOPS(sts, TXT_CR_STS, 64);
+DECLARE_TXT_FOPS(ests, TXT_CR_ESTS, 8);
+DECLARE_TXT_FOPS(errorcode, TXT_CR_ERRORCODE, 32);
+DECLARE_TXT_FOPS(didvid, TXT_CR_DIDVID, 64);
+DECLARE_TXT_FOPS(e2sts, TXT_CR_E2STS, 64);
+DECLARE_TXT_FOPS(ver_emif, TXT_CR_VER_EMIF, 32);
+DECLARE_TXT_FOPS(scratchpad, TXT_CR_SCRATCHPAD, 64);
+
+/*
+ * Securityfs exposure
+ */
+struct memfile {
+   char *name;
+   void *addr;
+   size_t size;
+};
+
+static struct memfile sl_evtlog = {"

[PATCH v9 16/19] tpm: Add ability to set the preferred locality the TPM chip uses

2024-05-30 Thread Ross Philipson
Curently the locality is hard coded to 0 but for DRTM support, access
is needed to localities 1 through 4.

Signed-off-by: Ross Philipson 
---
 drivers/char/tpm/tpm-chip.c  | 24 +++-
 drivers/char/tpm/tpm-interface.c | 15 +++
 drivers/char/tpm/tpm.h   |  1 +
 include/linux/tpm.h  |  4 
 4 files changed, 43 insertions(+), 1 deletion(-)

diff --git a/drivers/char/tpm/tpm-chip.c b/drivers/char/tpm/tpm-chip.c
index 854546000c92..73eac54d61fb 100644
--- a/drivers/char/tpm/tpm-chip.c
+++ b/drivers/char/tpm/tpm-chip.c
@@ -44,7 +44,7 @@ static int tpm_request_locality(struct tpm_chip *chip)
if (!chip->ops->request_locality)
return 0;
 
-   rc = chip->ops->request_locality(chip, 0);
+   rc = chip->ops->request_locality(chip, chip->pref_locality);
if (rc < 0)
return rc;
 
@@ -143,6 +143,27 @@ void tpm_chip_stop(struct tpm_chip *chip)
 }
 EXPORT_SYMBOL_GPL(tpm_chip_stop);
 
+/**
+ * tpm_chip_preferred_locality() - set the TPM chip preferred locality to open
+ * @chip:  a TPM chip to use
+ * @locality:   the preferred locality
+ *
+ * Return:
+ * * true  - Preferred locality set
+ * * false - Invalid locality specified
+ */
+bool tpm_chip_preferred_locality(struct tpm_chip *chip, int locality)
+{
+   if (locality < 0 || locality >=TPM_MAX_LOCALITY)
+   return false;
+
+   mutex_lock(&chip->tpm_mutex);
+   chip->pref_locality = locality;
+   mutex_unlock(&chip->tpm_mutex);
+   return true;
+}
+EXPORT_SYMBOL_GPL(tpm_chip_preferred_locality);
+
 /**
  * tpm_try_get_ops() - Get a ref to the tpm_chip
  * @chip: Chip to ref
@@ -374,6 +395,7 @@ struct tpm_chip *tpm_chip_alloc(struct device *pdev,
}
 
chip->locality = -1;
+   chip->pref_locality = 0;
return chip;
 
 out:
diff --git a/drivers/char/tpm/tpm-interface.c b/drivers/char/tpm/tpm-interface.c
index 5da134f12c9a..35f14ccecf0e 100644
--- a/drivers/char/tpm/tpm-interface.c
+++ b/drivers/char/tpm/tpm-interface.c
@@ -274,6 +274,21 @@ int tpm_is_tpm2(struct tpm_chip *chip)
 }
 EXPORT_SYMBOL_GPL(tpm_is_tpm2);
 
+/**
+ * tpm_preferred_locality() - set the TPM chip preferred locality to open
+ * @chip:  a TPM chip to use
+ * @locality:   the preferred locality
+ *
+ * Return:
+ * * true  - Preferred locality set
+ * * false - Invalid locality specified
+ */
+bool tpm_preferred_locality(struct tpm_chip *chip, int locality)
+{
+   return tpm_chip_preferred_locality(chip, locality);
+}
+EXPORT_SYMBOL_GPL(tpm_preferred_locality);
+
 /**
  * tpm_pcr_read - read a PCR value from SHA1 bank
  * @chip:  a &struct tpm_chip instance, %NULL for the default chip
diff --git a/drivers/char/tpm/tpm.h b/drivers/char/tpm/tpm.h
index 6b8b9956ba69..be465422d3fa 100644
--- a/drivers/char/tpm/tpm.h
+++ b/drivers/char/tpm/tpm.h
@@ -267,6 +267,7 @@ static inline void tpm_msleep(unsigned int delay_msec)
 int tpm_chip_bootstrap(struct tpm_chip *chip);
 int tpm_chip_start(struct tpm_chip *chip);
 void tpm_chip_stop(struct tpm_chip *chip);
+bool tpm_chip_preferred_locality(struct tpm_chip *chip, int locality);
 struct tpm_chip *tpm_find_get_ops(struct tpm_chip *chip);
 
 struct tpm_chip *tpm_chip_alloc(struct device *dev,
diff --git a/include/linux/tpm.h b/include/linux/tpm.h
index 363f7078c3a9..935a3457d7c8 100644
--- a/include/linux/tpm.h
+++ b/include/linux/tpm.h
@@ -219,6 +219,9 @@ struct tpm_chip {
u8 null_ec_key_y[EC_PT_SZ];
struct tpm2_auth *auth;
 #endif
+
+   /* preferred locality - default 0 */
+   int pref_locality;
 };
 
 #define TPM_HEADER_SIZE10
@@ -461,6 +464,7 @@ static inline u32 tpm2_rc_value(u32 rc)
 #if defined(CONFIG_TCG_TPM) || defined(CONFIG_TCG_TPM_MODULE)
 
 extern int tpm_is_tpm2(struct tpm_chip *chip);
+extern bool tpm_preferred_locality(struct tpm_chip *chip, int locality);
 extern __must_check int tpm_try_get_ops(struct tpm_chip *chip);
 extern void tpm_put_ops(struct tpm_chip *chip);
 extern ssize_t tpm_transmit_cmd(struct tpm_chip *chip, struct tpm_buf *buf,
-- 
2.39.3




[PATCH v9 15/19] tpm: Make locality requests return consistent values

2024-05-30 Thread Ross Philipson
From: "Daniel P. Smith" 

The function tpm_tis_request_locality() is expected to return the locality
value that was requested, or a negative error code upon failure. If it is called
while locality_count of struct tis_data is non-zero, no actual locality request
will be sent. Because the ret variable is initially set to 0, the
locality_count will still get increased, and the function will return 0. For a
caller, this would indicate that locality 0 was successfully requested and not
the state changes just mentioned.

Additionally, the function __tpm_tis_request_locality() provides inconsistent
error codes. It will provide either a failed IO write or a -1 should it have
timed out waiting for locality request to succeed.

This commit changes __tpm_tis_request_locality() to return valid negative error
codes to reflect the reason it fails. It then adjusts the return value check in
tpm_tis_request_locality() to check for a non-negative return value before
incrementing locality_cout. In addition, the initial value of the ret value is
set to a negative error to ensure the check does not pass if
__tpm_tis_request_locality() is not called.

Signed-off-by: Daniel P. Smith 
Signed-off-by: Ross Philipson 
---
 drivers/char/tpm/tpm_tis_core.c | 11 +++
 1 file changed, 7 insertions(+), 4 deletions(-)

diff --git a/drivers/char/tpm/tpm_tis_core.c b/drivers/char/tpm/tpm_tis_core.c
index 9fb53bb3e73f..685bdeadec51 100644
--- a/drivers/char/tpm/tpm_tis_core.c
+++ b/drivers/char/tpm/tpm_tis_core.c
@@ -208,7 +208,7 @@ static int __tpm_tis_request_locality(struct tpm_chip 
*chip, int l)
 again:
timeout = stop - jiffies;
if ((long)timeout <= 0)
-   return -1;
+   return -EBUSY;
rc = wait_event_interruptible_timeout(priv->int_queue,
  (check_locality
   (chip, l)),
@@ -227,18 +227,21 @@ static int __tpm_tis_request_locality(struct tpm_chip 
*chip, int l)
tpm_msleep(TPM_TIMEOUT);
} while (time_before(jiffies, stop));
}
-   return -1;
+   return -EBUSY;
 }
 
 static int tpm_tis_request_locality(struct tpm_chip *chip, int l)
 {
struct tpm_tis_data *priv = dev_get_drvdata(&chip->dev);
-   int ret = 0;
+   int ret = -EBUSY;
+
+   if (l < 0 || l > TPM_MAX_LOCALITY)
+   return -EINVAL;
 
mutex_lock(&priv->locality_count_mutex);
if (priv->locality_count == 0)
ret = __tpm_tis_request_locality(chip, l);
-   if (!ret)
+   if (ret >= 0)
priv->locality_count++;
mutex_unlock(&priv->locality_count_mutex);
return ret;
-- 
2.39.3




[PATCH v9 14/19] tpm: Ensure tpm is in known state at startup

2024-05-30 Thread Ross Philipson
From: "Daniel P. Smith" 

When tis core initializes, it assumes all localities are closed. There
are cases when this may not be the case. This commit addresses this by
ensuring all localities are closed before initializing begins.

Signed-off-by: Daniel P. Smith 
Signed-off-by: Ross Philipson 
---
 drivers/char/tpm/tpm_tis_core.c | 11 ++-
 include/linux/tpm.h |  6 ++
 2 files changed, 16 insertions(+), 1 deletion(-)

diff --git a/drivers/char/tpm/tpm_tis_core.c b/drivers/char/tpm/tpm_tis_core.c
index 7c1761bd6000..9fb53bb3e73f 100644
--- a/drivers/char/tpm/tpm_tis_core.c
+++ b/drivers/char/tpm/tpm_tis_core.c
@@ -1104,7 +1104,7 @@ int tpm_tis_core_init(struct device *dev, struct 
tpm_tis_data *priv, int irq,
u32 intmask;
u32 clkrun_val;
u8 rid;
-   int rc, probe;
+   int rc, probe, i;
struct tpm_chip *chip;
 
chip = tpmm_chip_alloc(dev, &tpm_tis);
@@ -1166,6 +1166,15 @@ int tpm_tis_core_init(struct device *dev, struct 
tpm_tis_data *priv, int irq,
goto out_err;
}
 
+   /*
+* There are environments, like Intel TXT, that may leave a TPM
+* locality open. Close all localities to start from a known state.
+*/
+   for (i = 0; i <= TPM_MAX_LOCALITY; i++) {
+   if (check_locality(chip, i))
+   tpm_tis_relinquish_locality(chip, i);
+   }
+
/* Take control of the TPM's interrupt hardware and shut it off */
rc = tpm_tis_read32(priv, TPM_INT_ENABLE(priv->locality), &intmask);
if (rc < 0)
diff --git a/include/linux/tpm.h b/include/linux/tpm.h
index c17e4efbb2e5..363f7078c3a9 100644
--- a/include/linux/tpm.h
+++ b/include/linux/tpm.h
@@ -147,6 +147,12 @@ struct tpm_chip_seqops {
  */
 #define TPM2_MAX_CONTEXT_SIZE 4096
 
+/*
+ * The maximum locality (0 - 4) for a TPM, as defined in section 3.2 of the
+ * Client Platform Profile Specification.
+ */
+#define TPM_MAX_LOCALITY   4
+
 struct tpm_chip {
struct device dev;
struct device devs;
-- 
2.39.3




[PATCH v9 11/19] kexec: Secure Launch kexec SEXIT support

2024-05-30 Thread Ross Philipson
Prior to running the next kernel via kexec, the Secure Launch code
closes down private SMX resources and does an SEXIT. This allows the
next kernel to start normally without any issues starting the APs etc.

Signed-off-by: Ross Philipson 
---
 arch/x86/kernel/slaunch.c | 73 +++
 kernel/kexec_core.c   |  4 +++
 2 files changed, 77 insertions(+)

diff --git a/arch/x86/kernel/slaunch.c b/arch/x86/kernel/slaunch.c
index 48c9ca78e241..f35b4ba433fa 100644
--- a/arch/x86/kernel/slaunch.c
+++ b/arch/x86/kernel/slaunch.c
@@ -523,3 +523,76 @@ void __init slaunch_setup_txt(void)
 
pr_info("Intel TXT setup complete\n");
 }
+
+static inline void smx_getsec_sexit(void)
+{
+   asm volatile ("getsec\n"
+ : : "a" (SMX_X86_GETSEC_SEXIT));
+}
+
+/*
+ * Used during kexec and on reboot paths to finalize the TXT state
+ * and do an SEXIT exiting the DRTM and disabling SMX mode.
+ */
+void slaunch_finalize(int do_sexit)
+{
+   u64 one = TXT_REGVALUE_ONE, val;
+   void __iomem *config;
+
+   if ((slaunch_get_flags() & (SL_FLAG_ACTIVE | SL_FLAG_ARCH_TXT)) !=
+   (SL_FLAG_ACTIVE | SL_FLAG_ARCH_TXT))
+   return;
+
+   config = ioremap(TXT_PRIV_CONFIG_REGS_BASE, TXT_NR_CONFIG_PAGES *
+PAGE_SIZE);
+   if (!config) {
+   pr_emerg("Error SEXIT failed to ioremap TXT private reqs\n");
+   return;
+   }
+
+   /* Clear secrets bit for SEXIT */
+   memcpy_toio(config + TXT_CR_CMD_NO_SECRETS, &one, sizeof(one));
+   memcpy_fromio(&val, config + TXT_CR_E2STS, sizeof(val));
+
+   /* Unlock memory configurations */
+   memcpy_toio(config + TXT_CR_CMD_UNLOCK_MEM_CONFIG, &one, sizeof(one));
+   memcpy_fromio(&val, config + TXT_CR_E2STS, sizeof(val));
+
+   /* Close the TXT private register space */
+   memcpy_toio(config + TXT_CR_CMD_CLOSE_PRIVATE, &one, sizeof(one));
+   memcpy_fromio(&val, config + TXT_CR_E2STS, sizeof(val));
+
+   /*
+* Calls to iounmap are not being done because of the state of the
+* system this late in the kexec process. Local IRQs are disabled and
+* iounmap causes a TLB flush which in turn causes a warning. Leaving
+* thse mappings is not an issue since the next kernel is going to
+* completely re-setup memory management.
+*/
+
+   /* Map public registers and do a final read fence */
+   config = ioremap(TXT_PUB_CONFIG_REGS_BASE, TXT_NR_CONFIG_PAGES *
+PAGE_SIZE);
+   if (!config) {
+   pr_emerg("Error SEXIT failed to ioremap TXT public reqs\n");
+   return;
+   }
+
+   memcpy_fromio(&val, config + TXT_CR_E2STS, sizeof(val));
+
+   pr_emerg("TXT clear secrets bit and unlock memory complete.\n");
+
+   if (!do_sexit)
+   return;
+
+   if (smp_processor_id() != 0)
+   panic("Error TXT SEXIT must be called on CPU 0\n");
+
+   /* In case SMX mode was disabled, enable it for SEXIT */
+   cr4_set_bits(X86_CR4_SMXE);
+
+   /* Do the SEXIT SMX operation */
+   smx_getsec_sexit();
+
+   pr_info("TXT SEXIT complete.\n");
+}
diff --git a/kernel/kexec_core.c b/kernel/kexec_core.c
index 0e96f6b24344..ba2fd1c0ddd9 100644
--- a/kernel/kexec_core.c
+++ b/kernel/kexec_core.c
@@ -40,6 +40,7 @@
 #include 
 #include 
 #include 
+#include 
 
 #include 
 #include 
@@ -1046,6 +1047,9 @@ int kernel_kexec(void)
cpu_hotplug_enable();
pr_notice("Starting new kernel\n");
machine_shutdown();
+
+   /* Finalize TXT registers and do SEXIT */
+   slaunch_finalize(1);
}
 
kmsg_dump(KMSG_DUMP_SHUTDOWN);
-- 
2.39.3




[PATCH v9 13/19] tpm: Protect against locality counter underflow

2024-05-30 Thread Ross Philipson
From: "Daniel P. Smith" 

Commit 933bfc5ad213 introduced the use of a locality counter to control when a
locality request is allowed to be sent to the TPM. In the commit, the counter
is indiscriminately decremented. Thus creating a situation for an integer
underflow of the counter.

Signed-off-by: Daniel P. Smith 
Signed-off-by: Ross Philipson 
Reported-by: Kanth Ghatraju 
Fixes: 933bfc5ad213 ("tpm, tpm: Implement usage counter for locality")
---
 drivers/char/tpm/tpm_tis_core.c | 3 ++-
 1 file changed, 2 insertions(+), 1 deletion(-)

diff --git a/drivers/char/tpm/tpm_tis_core.c b/drivers/char/tpm/tpm_tis_core.c
index 176cd8dbf1db..7c1761bd6000 100644
--- a/drivers/char/tpm/tpm_tis_core.c
+++ b/drivers/char/tpm/tpm_tis_core.c
@@ -180,7 +180,8 @@ static int tpm_tis_relinquish_locality(struct tpm_chip 
*chip, int l)
struct tpm_tis_data *priv = dev_get_drvdata(&chip->dev);
 
mutex_lock(&priv->locality_count_mutex);
-   priv->locality_count--;
+   if (priv->locality_count > 0)
+   priv->locality_count--;
if (priv->locality_count == 0)
__tpm_tis_relinquish_locality(priv, l);
mutex_unlock(&priv->locality_count_mutex);
-- 
2.39.3




[PATCH v9 09/19] x86: Secure Launch kernel late boot stub

2024-05-30 Thread Ross Philipson
The routine slaunch_setup is called out of the x86 specific setup_arch()
routine during early kernel boot. After determining what platform is
present, various operations specific to that platform occur. This
includes finalizing setting for the platform late launch and verifying
that memory protections are in place.

For TXT, this code also reserves the original compressed kernel setup
area where the APs were left looping so that this memory cannot be used.

Signed-off-by: Ross Philipson 
---
 arch/x86/kernel/Makefile   |   1 +
 arch/x86/kernel/setup.c|   3 +
 arch/x86/kernel/slaunch.c  | 525 +
 drivers/iommu/intel/dmar.c |   4 +
 4 files changed, 533 insertions(+)
 create mode 100644 arch/x86/kernel/slaunch.c

diff --git a/arch/x86/kernel/Makefile b/arch/x86/kernel/Makefile
index 5d128167e2e2..b35ca99ab0a0 100644
--- a/arch/x86/kernel/Makefile
+++ b/arch/x86/kernel/Makefile
@@ -76,6 +76,7 @@ obj-$(CONFIG_X86_32)  += tls.o
 obj-$(CONFIG_IA32_EMULATION)   += tls.o
 obj-y  += step.o
 obj-$(CONFIG_INTEL_TXT)+= tboot.o
+obj-$(CONFIG_SECURE_LAUNCH)+= slaunch.o
 obj-$(CONFIG_ISA_DMA_API)  += i8237.o
 obj-y  += stacktrace.o
 obj-y  += cpu/
diff --git a/arch/x86/kernel/setup.c b/arch/x86/kernel/setup.c
index 55a1fc332e20..31d1e6b9bd36 100644
--- a/arch/x86/kernel/setup.c
+++ b/arch/x86/kernel/setup.c
@@ -21,6 +21,7 @@
 #include 
 #include 
 #include 
+#include 
 #include 
 #include 
 #include 
@@ -936,6 +937,8 @@ void __init setup_arch(char **cmdline_p)
early_gart_iommu_check();
 #endif
 
+   slaunch_setup_txt();
+
/*
 * partially used pages are not usable - thus
 * we are rounding upwards:
diff --git a/arch/x86/kernel/slaunch.c b/arch/x86/kernel/slaunch.c
new file mode 100644
index ..48c9ca78e241
--- /dev/null
+++ b/arch/x86/kernel/slaunch.c
@@ -0,0 +1,525 @@
+// SPDX-License-Identifier: GPL-2.0
+/*
+ * Secure Launch late validation/setup and finalization support.
+ *
+ * Copyright (c) 2024, Oracle and/or its affiliates.
+ */
+
+#define pr_fmt(fmt) KBUILD_MODNAME ": " fmt
+
+#include 
+#include 
+#include 
+#include 
+#include 
+#include 
+#include 
+#include 
+#include 
+#include 
+#include 
+#include 
+#include 
+#include 
+
+static u32 sl_flags __ro_after_init;
+static struct sl_ap_wake_info ap_wake_info __ro_after_init;
+static u64 evtlog_addr __ro_after_init;
+static u32 evtlog_size __ro_after_init;
+static u64 vtd_pmr_lo_size __ro_after_init;
+
+/* This should be plenty of room */
+static u8 txt_dmar[PAGE_SIZE] __aligned(16);
+
+/*
+ * Get the Secure Launch flags that indicate what kind of launch is being done.
+ * E.g. a TXT launch is in progress or no Secure Launch is happening.
+ */
+u32 slaunch_get_flags(void)
+{
+   return sl_flags;
+}
+
+/*
+ * Return the AP wakeup information used in the SMP boot code to start up
+ * the APs that are parked using MONITOR/MWAIT.
+ */
+struct sl_ap_wake_info *slaunch_get_ap_wake_info(void)
+{
+   return &ap_wake_info;
+}
+
+/*
+ * On Intel platforms, TXT passes a safe copy of the DMAR ACPI table to the
+ * DRTM. The DRTM is supposed to use this instead of the one found in the
+ * ACPI tables.
+ */
+struct acpi_table_header *slaunch_get_dmar_table(struct acpi_table_header 
*dmar)
+{
+   /* The DMAR is only stashed and provided via TXT on Intel systems */
+   if (memcmp(txt_dmar, "DMAR", 4))
+   return dmar;
+
+   return (struct acpi_table_header *)(txt_dmar);
+}
+
+/*
+ * If running within a TXT established DRTM, this is the proper way to reset
+ * the system if a failure occurs or a security issue is found.
+ */
+void __noreturn slaunch_txt_reset(void __iomem *txt,
+ const char *msg, u64 error)
+{
+   u64 one = 1, val;
+
+   pr_err("%s", msg);
+
+   /*
+* This performs a TXT reset with a sticky error code. The reads of
+* TXT_CR_E2STS act as barriers.
+*/
+   memcpy_toio(txt + TXT_CR_ERRORCODE, &error, sizeof(error));
+   memcpy_fromio(&val, txt + TXT_CR_E2STS, sizeof(val));
+   memcpy_toio(txt + TXT_CR_CMD_NO_SECRETS, &one, sizeof(one));
+   memcpy_fromio(&val, txt + TXT_CR_E2STS, sizeof(val));
+   memcpy_toio(txt + TXT_CR_CMD_UNLOCK_MEM_CONFIG, &one, sizeof(one));
+   memcpy_fromio(&val, txt + TXT_CR_E2STS, sizeof(val));
+   memcpy_toio(txt + TXT_CR_CMD_RESET, &one, sizeof(one));
+
+   for ( ; ; )
+   asm volatile ("hlt");
+
+   unreachable();
+}
+
+/*
+ * The TXT heap is too big to map all at once with early_ioremap
+ * so it is done a table at a time.
+ */
+static void __init *txt_early_get_heap_table(void __iomem *txt, u32 type,
+u32 bytes)
+{
+   u64 base, size, offset = 0;
+   void *heap;
+   int i;
+
+

[PATCH v9 17/19] tpm: Add sysfs interface to allow setting and querying the preferred locality

2024-05-30 Thread Ross Philipson
Expose a sysfs interface to allow user mode to set and query the preferred
locality for the TPM chip.

Signed-off-by: Ross Philipson 
---
 drivers/char/tpm/tpm-sysfs.c | 30 ++
 1 file changed, 30 insertions(+)

diff --git a/drivers/char/tpm/tpm-sysfs.c b/drivers/char/tpm/tpm-sysfs.c
index 94231f052ea7..5f4a966a4599 100644
--- a/drivers/char/tpm/tpm-sysfs.c
+++ b/drivers/char/tpm/tpm-sysfs.c
@@ -324,6 +324,34 @@ static ssize_t null_name_show(struct device *dev, struct 
device_attribute *attr,
 static DEVICE_ATTR_RO(null_name);
 #endif
 
+static ssize_t preferred_locality_show(struct device *dev,
+  struct device_attribute *attr, char *buf)
+{
+   struct tpm_chip *chip = to_tpm_chip(dev);
+
+   return sprintf(buf, "%d\n", chip->pref_locality);
+}
+
+static ssize_t preferred_locality_store(struct device *dev, struct 
device_attribute *attr,
+   const char *buf, size_t count)
+{
+   struct tpm_chip *chip = to_tpm_chip(dev);
+   unsigned int locality;
+
+   if (kstrtouint(buf, 0, &locality))
+   return -ERANGE;
+
+   if (locality >= TPM_MAX_LOCALITY)
+   return -ERANGE;
+
+   if (tpm_chip_preferred_locality(chip, (int)locality))
+   return count;
+   else
+   return 0;
+}
+
+static DEVICE_ATTR_RW(preferred_locality);
+
 static struct attribute *tpm1_dev_attrs[] = {
&dev_attr_pubek.attr,
&dev_attr_pcrs.attr,
@@ -336,6 +364,7 @@ static struct attribute *tpm1_dev_attrs[] = {
&dev_attr_durations.attr,
&dev_attr_timeouts.attr,
&dev_attr_tpm_version_major.attr,
+   &dev_attr_preferred_locality.attr,
NULL,
 };
 
@@ -344,6 +373,7 @@ static struct attribute *tpm2_dev_attrs[] = {
 #ifdef CONFIG_TCG_TPM2_HMAC
&dev_attr_null_name.attr,
 #endif
+   &dev_attr_preferred_locality.attr,
NULL
 };
 
-- 
2.39.3




[PATCH v9 12/19] reboot: Secure Launch SEXIT support on reboot paths

2024-05-30 Thread Ross Philipson
If the MLE kernel is being powered off, rebooted or halted,
then SEXIT must be called. Note that the SEXIT GETSEC leaf
can only be called after a machine_shutdown() has been done on
these paths. The machine_shutdown() is not called on a few paths
like when poweroff action does not have a poweroff callback (into
ACPI code) or when an emergency reset is done. In these cases,
just the TXT registers are finalized but SEXIT is skipped.

Signed-off-by: Ross Philipson 
---
 arch/x86/kernel/reboot.c | 10 ++
 1 file changed, 10 insertions(+)

diff --git a/arch/x86/kernel/reboot.c b/arch/x86/kernel/reboot.c
index f3130f762784..66060fdb0822 100644
--- a/arch/x86/kernel/reboot.c
+++ b/arch/x86/kernel/reboot.c
@@ -12,6 +12,7 @@
 #include 
 #include 
 #include 
+#include 
 #include 
 #include 
 #include 
@@ -766,6 +767,7 @@ static void native_machine_restart(char *__unused)
 
if (!reboot_force)
machine_shutdown();
+   slaunch_finalize(!reboot_force);
__machine_emergency_restart(0);
 }
 
@@ -776,6 +778,9 @@ static void native_machine_halt(void)
 
tboot_shutdown(TB_SHUTDOWN_HALT);
 
+   /* SEXIT done after machine_shutdown() to meet TXT requirements */
+   slaunch_finalize(1);
+
stop_this_cpu(NULL);
 }
 
@@ -784,8 +789,12 @@ static void native_machine_power_off(void)
if (kernel_can_power_off()) {
if (!reboot_force)
machine_shutdown();
+   slaunch_finalize(!reboot_force);
do_kernel_power_off();
+   } else {
+   slaunch_finalize(0);
}
+
/* A fallback in case there is no PM info available */
tboot_shutdown(TB_SHUTDOWN_HALT);
 }
@@ -813,6 +822,7 @@ void machine_shutdown(void)
 
 void machine_emergency_restart(void)
 {
+   slaunch_finalize(0);
__machine_emergency_restart(1);
 }
 
-- 
2.39.3




[PATCH v9 10/19] x86: Secure Launch SMP bringup support

2024-05-30 Thread Ross Philipson
On Intel, the APs are left in a well documented state after TXT performs
the late launch. Specifically they cannot have #INIT asserted on them so
a standard startup via INIT/SIPI/SIPI cannot be performed. Instead the
early SL stub code uses MONITOR and MWAIT to park the APs. The realmode/init.c
code updates the jump address for the waiting APs with the location of the
Secure Launch entry point in the RM piggy after it is loaded and fixed up.
As the APs are woken up by writing the monitor, the APs jump to the Secure
Launch entry point in the RM piggy which mimics what the real mode code would
do then jumps to the standard RM piggy protected mode entry point.

Signed-off-by: Ross Philipson 
---
 arch/x86/include/asm/realmode.h  |  3 ++
 arch/x86/kernel/smpboot.c| 58 +++-
 arch/x86/realmode/init.c |  3 ++
 arch/x86/realmode/rm/header.S|  3 ++
 arch/x86/realmode/rm/trampoline_64.S | 32 +++
 5 files changed, 97 insertions(+), 2 deletions(-)

diff --git a/arch/x86/include/asm/realmode.h b/arch/x86/include/asm/realmode.h
index 87e5482acd0d..339b48e2543d 100644
--- a/arch/x86/include/asm/realmode.h
+++ b/arch/x86/include/asm/realmode.h
@@ -38,6 +38,9 @@ struct real_mode_header {
 #ifdef CONFIG_X86_64
u32 machine_real_restart_seg;
 #endif
+#ifdef CONFIG_SECURE_LAUNCH
+   u32 sl_trampoline_start32;
+#endif
 };
 
 /* This must match data at realmode/rm/trampoline_{32,64}.S */
diff --git a/arch/x86/kernel/smpboot.c b/arch/x86/kernel/smpboot.c
index 0c35207320cb..adb521221d6c 100644
--- a/arch/x86/kernel/smpboot.c
+++ b/arch/x86/kernel/smpboot.c
@@ -60,6 +60,7 @@
 #include 
 #include 
 #include 
+#include 
 
 #include 
 #include 
@@ -868,6 +869,56 @@ int common_cpu_up(unsigned int cpu, struct task_struct 
*idle)
return 0;
 }
 
+#ifdef CONFIG_SECURE_LAUNCH
+
+static bool slaunch_is_txt_launch(void)
+{
+   if ((slaunch_get_flags() & (SL_FLAG_ACTIVE|SL_FLAG_ARCH_TXT)) ==
+   (SL_FLAG_ACTIVE | SL_FLAG_ARCH_TXT))
+   return true;
+
+   return false;
+}
+
+/*
+ * TXT AP startup is quite different than normal. The APs cannot have #INIT
+ * asserted on them or receive SIPIs. The early Secure Launch code has parked
+ * the APs using monitor/mwait. This will wake the APs by writing the monitor
+ * and have them jump to the protected mode code in the rmpiggy where the rest
+ * of the SMP boot of the AP will proceed normally.
+ */
+static void slaunch_wakeup_cpu_from_txt(int cpu, int apicid)
+{
+   struct sl_ap_wake_info *ap_wake_info;
+   struct sl_ap_stack_and_monitor *stack_monitor = NULL;
+
+   ap_wake_info = slaunch_get_ap_wake_info();
+
+   stack_monitor = (struct sl_ap_stack_and_monitor 
*)__va(ap_wake_info->ap_wake_block +
+  
ap_wake_info->ap_stacks_offset);
+
+   for (unsigned int i = TXT_MAX_CPUS - 1; i >= 0; i--) {
+   if (stack_monitor[i].apicid == apicid) {
+   /* Write the monitor */
+   stack_monitor[i].monitor = 1;
+   break;
+   }
+   }
+}
+
+#else
+
+static inline bool slaunch_is_txt_launch(void)
+{
+   return false;
+}
+
+static inline void slaunch_wakeup_cpu_from_txt(int cpu, int apicid)
+{
+}
+
+#endif  /* !CONFIG_SECURE_LAUNCH */
+
 /*
  * NOTE - on most systems this is a PHYSICAL apic ID, but on multiquad
  * (ie clustered apic addressing mode), this is a LOGICAL apic ID.
@@ -877,7 +928,7 @@ int common_cpu_up(unsigned int cpu, struct task_struct 
*idle)
 static int do_boot_cpu(u32 apicid, int cpu, struct task_struct *idle)
 {
unsigned long start_ip = real_mode_header->trampoline_start;
-   int ret;
+   int ret = 0;
 
 #ifdef CONFIG_X86_64
/* If 64-bit wakeup method exists, use the 64-bit mode trampoline IP */
@@ -922,12 +973,15 @@ static int do_boot_cpu(u32 apicid, int cpu, struct 
task_struct *idle)
 
/*
 * Wake up a CPU in difference cases:
+* - Intel TXT DRTM launch uses its own method to wake the APs
 * - Use a method from the APIC driver if one defined, with wakeup
 *   straight to 64-bit mode preferred over wakeup to RM.
 * Otherwise,
 * - Use an INIT boot APIC message
 */
-   if (apic->wakeup_secondary_cpu_64)
+   if (slaunch_is_txt_launch())
+   slaunch_wakeup_cpu_from_txt(cpu, apicid);
+   else if (apic->wakeup_secondary_cpu_64)
ret = apic->wakeup_secondary_cpu_64(apicid, start_ip);
else if (apic->wakeup_secondary_cpu)
ret = apic->wakeup_secondary_cpu(apicid, start_ip);
diff --git a/arch/x86/realmode/init.c b/arch/x86/realmode/init.c
index f9bc444a3064..d95776cb30d3 100644
--- a/arch/x86/realmode/init.c
+++ b/arch/x86/realmode/init.c
@@ -4,6 +4,7 @@
 #include 
 #include 
 #include 
+#include 
 
 #include 
 #include 

[PATCH v9 07/19] x86: Add early SHA-256 support for Secure Launch early measurements

2024-05-30 Thread Ross Philipson
From: "Daniel P. Smith" 

The SHA-256 algorithm is necessary to measure configuration information into
the TPM as early as possible before using the values. This implementation
uses the established approach of #including the SHA-256 libraries directly in
the code since the compressed kernel is not uncompressed at this point.

Signed-off-by: Daniel P. Smith 
Signed-off-by: Ross Philipson 
---
 arch/x86/boot/compressed/Makefile   | 2 +-
 arch/x86/boot/compressed/early_sha256.c | 6 ++
 2 files changed, 7 insertions(+), 1 deletion(-)
 create mode 100644 arch/x86/boot/compressed/early_sha256.c

diff --git a/arch/x86/boot/compressed/Makefile 
b/arch/x86/boot/compressed/Makefile
index 3307ebef4e1b..9189a0e28686 100644
--- a/arch/x86/boot/compressed/Makefile
+++ b/arch/x86/boot/compressed/Makefile
@@ -118,7 +118,7 @@ vmlinux-objs-$(CONFIG_EFI) += $(obj)/efi.o
 vmlinux-objs-$(CONFIG_EFI_MIXED) += $(obj)/efi_mixed.o
 vmlinux-objs-$(CONFIG_EFI_STUB) += 
$(objtree)/drivers/firmware/efi/libstub/lib.a
 
-vmlinux-objs-$(CONFIG_SECURE_LAUNCH) += $(obj)/early_sha1.o
+vmlinux-objs-$(CONFIG_SECURE_LAUNCH) += $(obj)/early_sha1.o 
$(obj)/early_sha256.o
 
 $(obj)/vmlinux: $(vmlinux-objs-y) FORCE
$(call if_changed,ld)
diff --git a/arch/x86/boot/compressed/early_sha256.c 
b/arch/x86/boot/compressed/early_sha256.c
new file mode 100644
index ..293742a90ddc
--- /dev/null
+++ b/arch/x86/boot/compressed/early_sha256.c
@@ -0,0 +1,6 @@
+// SPDX-License-Identifier: GPL-2.0
+/*
+ * Copyright (c) 2024 Apertus Solutions, LLC
+ */
+
+#include "../../../../lib/crypto/sha256.c"
-- 
2.39.3




[PATCH v9 02/19] Documentation/x86: Secure Launch kernel documentation

2024-05-30 Thread Ross Philipson
Introduce background, overview and configuration/ABI information
for the Secure Launch kernel feature.

Signed-off-by: Daniel P. Smith 
Signed-off-by: Ross Philipson 
Reviewed-by: Bagas Sanjaya 
---
 Documentation/security/index.rst  |   1 +
 .../security/launch-integrity/index.rst   |  11 +
 .../security/launch-integrity/principles.rst  | 320 ++
 .../secure_launch_details.rst | 587 ++
 .../secure_launch_overview.rst| 227 +++
 5 files changed, 1146 insertions(+)
 create mode 100644 Documentation/security/launch-integrity/index.rst
 create mode 100644 Documentation/security/launch-integrity/principles.rst
 create mode 100644 
Documentation/security/launch-integrity/secure_launch_details.rst
 create mode 100644 
Documentation/security/launch-integrity/secure_launch_overview.rst

diff --git a/Documentation/security/index.rst b/Documentation/security/index.rst
index 59f8fc106cb0..56e31fb3d91f 100644
--- a/Documentation/security/index.rst
+++ b/Documentation/security/index.rst
@@ -19,3 +19,4 @@ Security Documentation
digsig
landlock
secrets/index
+   launch-integrity/index
diff --git a/Documentation/security/launch-integrity/index.rst 
b/Documentation/security/launch-integrity/index.rst
new file mode 100644
index ..838328186dd2
--- /dev/null
+++ b/Documentation/security/launch-integrity/index.rst
@@ -0,0 +1,11 @@
+=
+System Launch Integrity documentation
+=
+
+.. toctree::
+   :maxdepth: 1
+
+   principles
+   secure_launch_overview
+   secure_launch_details
+
diff --git a/Documentation/security/launch-integrity/principles.rst 
b/Documentation/security/launch-integrity/principles.rst
new file mode 100644
index ..68a415aec545
--- /dev/null
+++ b/Documentation/security/launch-integrity/principles.rst
@@ -0,0 +1,320 @@
+.. SPDX-License-Identifier: GPL-2.0
+.. Copyright © 2019-2023 Daniel P. Smith 
+
+===
+System Launch Integrity
+===
+
+:Author: Daniel P. Smith
+:Date: October 2023
+
+This document serves to establish a common understanding of what is system
+launch, the integrity concern for system launch, and why using a Root of Trust
+(RoT) from a Dynamic Launch may be desired. Throughout this document
+terminology from the Trusted Computing Group (TCG) and National Institute for
+Science and Technology (NIST) is used to ensure a vendor natural language is
+used to describe and reference security-related concepts.
+
+System Launch
+=
+
+There is a tendency to only consider the classical power-on boot as the only
+means to launch an Operating System (OS) on a computer system, but in fact most
+modern processors support two methods to launch the system. To provide clarity
+a common definition of a system launch should be established. This definition
+is that a during a single power life cycle of a system, a System Launch
+consists of an initialization event, typically in hardware, that is followed by
+an executing software payload that takes the system from the initialized state
+to a running state. Driven by the Trusted Computing Group (TCG) architecture,
+modern processors are able to support two methods to launch a system, these two
+types of system launch are known as Static Launch and Dynamic Launch.
+
+Static Launch
+-
+
+Static launch is the system launch associated with the power cycle of the CPU.
+Thus, static launch refers to the classical power-on boot where the
+initialization event is the release of the CPU from reset and the system
+firmware is the software payload that brings the system up to a running state.
+Since static launch is the system launch associated with the beginning of the
+power lifecycle of a system, it is therefore a fixed, one-time system launch.
+It is because of this that static launch is referred to and thought of as being
+"static".
+
+Dynamic Launch
+--
+
+Modern CPUs architectures provides a mechanism to re-initialize the system to a
+"known good" state without requiring a power event. This re-initialization
+event is the event for a dynamic launch and is referred to as the Dynamic
+Launch Event (DLE). The DLE functions by accepting a software payload, referred
+to as the Dynamic Configuration Environment (DCE), that execution is handed to
+after the DLE is invoked. The DCE is responsible for bringing the system back
+to a running state. Since the dynamic launch is not tied to a power event like
+the static launch, this enables a dynamic launch to be initiated at any time
+and multiple times during a single power life cycle. This dynamism is the
+reasoning behind referring to this system launch as being dynamic.
+
+Because a dynamic launch can be conducted at any time during a single power
+life cycle, they are classified into one of two types, an early launch or a
+late launch.
+
+:Early Launch: W

[PATCH v9 05/19] x86: Secure Launch main header file

2024-05-30 Thread Ross Philipson
Introduce the main Secure Launch header file used in the early SL stub
and the early setup code.

Signed-off-by: Ross Philipson 
---
 include/linux/slaunch.h | 542 
 1 file changed, 542 insertions(+)
 create mode 100644 include/linux/slaunch.h

diff --git a/include/linux/slaunch.h b/include/linux/slaunch.h
new file mode 100644
index ..90a7f22ddbdd
--- /dev/null
+++ b/include/linux/slaunch.h
@@ -0,0 +1,542 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+/*
+ * Main Secure Launch header file.
+ *
+ * Copyright (c) 2024, Oracle and/or its affiliates.
+ */
+
+#ifndef _LINUX_SLAUNCH_H
+#define _LINUX_SLAUNCH_H
+
+/*
+ * Secure Launch Defined State Flags
+ */
+#define SL_FLAG_ACTIVE 0x0001
+#define SL_FLAG_ARCH_SKINIT0x0002
+#define SL_FLAG_ARCH_TXT   0x0004
+
+/*
+ * Secure Launch CPU Type
+ */
+#define SL_CPU_AMD 1
+#define SL_CPU_INTEL   2
+
+#if IS_ENABLED(CONFIG_SECURE_LAUNCH)
+
+#define __SL32_CS  0x0008
+#define __SL32_DS  0x0010
+
+/*
+ * Intel Safer Mode Extensions (SMX)
+ *
+ * Intel SMX provides a programming interface to establish a Measured Launched
+ * Environment (MLE). The measurement and protection mechanisms supported by 
the
+ * capabilities of an Intel Trusted Execution Technology (TXT) platform. SMX is
+ * the processor’s programming interface in an Intel TXT platform.
+ *
+ * See Intel SDM Volume 2 - 6.1 "Safer Mode Extensions Reference"
+ */
+
+/*
+ * SMX GETSEC Leaf Functions
+ */
+#define SMX_X86_GETSEC_SEXIT   5
+#define SMX_X86_GETSEC_SMCTRL  7
+#define SMX_X86_GETSEC_WAKEUP  8
+
+/*
+ * Intel Trusted Execution Technology MMIO Registers Banks
+ */
+#define TXT_PUB_CONFIG_REGS_BASE   0xfed3
+#define TXT_PRIV_CONFIG_REGS_BASE  0xfed2
+#define TXT_NR_CONFIG_PAGES ((TXT_PUB_CONFIG_REGS_BASE - \
+ TXT_PRIV_CONFIG_REGS_BASE) >> PAGE_SHIFT)
+
+/*
+ * Intel Trusted Execution Technology (TXT) Registers
+ */
+#define TXT_CR_STS 0x
+#define TXT_CR_ESTS0x0008
+#define TXT_CR_ERRORCODE   0x0030
+#define TXT_CR_CMD_RESET   0x0038
+#define TXT_CR_CMD_CLOSE_PRIVATE   0x0048
+#define TXT_CR_DIDVID  0x0110
+#define TXT_CR_VER_EMIF0x0200
+#define TXT_CR_CMD_UNLOCK_MEM_CONFIG   0x0218
+#define TXT_CR_SINIT_BASE  0x0270
+#define TXT_CR_SINIT_SIZE  0x0278
+#define TXT_CR_MLE_JOIN0x0290
+#define TXT_CR_HEAP_BASE   0x0300
+#define TXT_CR_HEAP_SIZE   0x0308
+#define TXT_CR_SCRATCHPAD  0x0378
+#define TXT_CR_CMD_OPEN_LOCALITY1  0x0380
+#define TXT_CR_CMD_CLOSE_LOCALITY1 0x0388
+#define TXT_CR_CMD_OPEN_LOCALITY2  0x0390
+#define TXT_CR_CMD_CLOSE_LOCALITY2 0x0398
+#define TXT_CR_CMD_SECRETS 0x08e0
+#define TXT_CR_CMD_NO_SECRETS  0x08e8
+#define TXT_CR_E2STS   0x08f0
+
+/* TXT default register value */
+#define TXT_REGVALUE_ONE   0x1ULL
+
+/* TXTCR_STS status bits */
+#define TXT_SENTER_DONE_STSBIT(0)
+#define TXT_SEXIT_DONE_STS BIT(1)
+
+/*
+ * SINIT/MLE Capabilities Field Bit Definitions
+ */
+#define TXT_SINIT_MLE_CAP_WAKE_GETSEC  0
+#define TXT_SINIT_MLE_CAP_WAKE_MONITOR 1
+
+/*
+ * OS/MLE Secure Launch Specific Definitions
+ */
+#define TXT_OS_MLE_STRUCT_VERSION  1
+#define TXT_OS_MLE_MAX_VARIABLE_MTRRS  32
+
+/*
+ * TXT Heap Table Enumeration
+ */
+#define TXT_BIOS_DATA_TABLE1
+#define TXT_OS_MLE_DATA_TABLE  2
+#define TXT_OS_SINIT_DATA_TABLE3
+#define TXT_SINIT_MLE_DATA_TABLE   4
+#define TXT_SINIT_TABLE_MAXTXT_SINIT_MLE_DATA_TABLE
+
+/*
+ * Secure Launch Defined Error Codes used in MLE-initiated TXT resets.
+ *
+ * TXT Specification
+ * Appendix I ACM Error Codes
+ */
+#define SL_ERROR_GENERIC   0xc0008001
+#define SL_ERROR_TPM_INIT  0xc0008002
+#define SL_ERROR_TPM_INVALID_LOG20 0xc0008003
+#define SL_ERROR_TPM_LOGGING_FAILED0xc0008004
+#define SL_ERROR_REGION_STRADDLE_4GB   0xc0008005
+#define SL_ERROR_TPM_EXTEND0xc0008006
+#define SL_ERROR_MTRR_INV_VCNT 0xc0008007
+#define SL_ERROR_MTRR_INV_DEF_TYPE 0xc0008008
+#define SL_ERROR_MTRR_INV_BASE 0xc0008009
+#define SL_ERROR_MTRR_INV_MASK 0xc000800a
+#define SL_ERROR_MSR_INV_MISC_EN   0xc000800b
+#define SL_ERROR_INV_AP_INTERRUPT  0xc000800c
+#define SL_ERROR_INTEGER_OVERFLOW  0xc000800d
+#define SL_ERROR_HEAP_WALK 0xc000800e
+#define SL_ERROR_HEAP_MAP  0xc000800f
+#define SL_ERROR_REGION_ABOVE_4GB  0xc0008010
+#define SL_ERROR_HEAP_INVALID_DMAR 0xc0008011
+#define SL_ERROR_HEAP_DMAR_SIZE0xc0008012
+#define SL_ERROR_HEAP_DMAR_MAP 0xc0008013
+#define SL_ERROR_HI_PMR_BASE   0xc0008014
+#define SL_ERROR_HI_PMR_SIZE   0xc

[PATCH v9 08/19] x86: Secure Launch kernel early boot stub

2024-05-30 Thread Ross Philipson
The Secure Launch (SL) stub provides the entry point for Intel TXT (and
later AMD SKINIT) to vector to during the late launch. The symbol
sl_stub_entry is that entry point and its offset into the kernel is
conveyed to the launching code using the MLE (Measured Launch
Environment) header in the structure named mle_header. The offset of the
MLE header is set in the kernel_info. The routine sl_stub contains the
very early late launch setup code responsible for setting up the basic
environment to allow the normal kernel startup_32 code to proceed. It is
also responsible for properly waking and handling the APs on Intel
platforms. The routine sl_main which runs after entering 64b mode is
responsible for measuring configuration and module information before
it is used like the boot params, the kernel command line, the TXT heap,
an external initramfs, etc.

Signed-off-by: Ross Philipson 
---
 Documentation/arch/x86/boot.rst|  21 +
 arch/x86/boot/compressed/Makefile  |   3 +-
 arch/x86/boot/compressed/head_64.S |  30 +
 arch/x86/boot/compressed/kernel_info.S |  34 ++
 arch/x86/boot/compressed/sl_main.c | 577 
 arch/x86/boot/compressed/sl_stub.S | 725 +
 arch/x86/include/asm/msr-index.h   |   5 +
 arch/x86/include/uapi/asm/bootparam.h  |   1 +
 arch/x86/kernel/asm-offsets.c  |  20 +
 9 files changed, 1415 insertions(+), 1 deletion(-)
 create mode 100644 arch/x86/boot/compressed/sl_main.c
 create mode 100644 arch/x86/boot/compressed/sl_stub.S

diff --git a/Documentation/arch/x86/boot.rst b/Documentation/arch/x86/boot.rst
index 4fd492cb4970..295cdf9bcbdb 100644
--- a/Documentation/arch/x86/boot.rst
+++ b/Documentation/arch/x86/boot.rst
@@ -482,6 +482,14 @@ Protocol:  2.00+
- If 1, KASLR enabled.
- If 0, KASLR disabled.
 
+  Bit 2 (kernel internal): SLAUNCH_FLAG
+
+   - Used internally by the setup kernel to communicate
+ Secure Launch status to kernel proper.
+
+   - If 1, Secure Launch enabled.
+   - If 0, Secure Launch disabled.
+
   Bit 5 (write): QUIET_FLAG
 
- If 0, print early messages.
@@ -1028,6 +1036,19 @@ Offset/size: 0x000c/4
 
   This field contains maximal allowed type for setup_data and setup_indirect 
structs.
 
+   =
+Field name:mle_header_offset
+Offset/size:   0x0010/4
+   =
+
+  This field contains the offset to the Secure Launch Measured Launch 
Environment
+  (MLE) header. This offset is used to locate information needed during a 
secure
+  late launch using Intel TXT. If the offset is zero, the kernel does not have
+  Secure Launch capabilities. The MLE entry point is called from TXT on the BSP
+  following a success measured launch. The specific state of the processors is
+  outlined in the TXT Software Development Guide, the latest can be found here:
+  
https://www.intel.com/content/dam/www/public/us/en/documents/guides/intel-txt-software-development-guide.pdf
+
 
 The Image Checksum
 ==
diff --git a/arch/x86/boot/compressed/Makefile 
b/arch/x86/boot/compressed/Makefile
index 9189a0e28686..9076a248d4b4 100644
--- a/arch/x86/boot/compressed/Makefile
+++ b/arch/x86/boot/compressed/Makefile
@@ -118,7 +118,8 @@ vmlinux-objs-$(CONFIG_EFI) += $(obj)/efi.o
 vmlinux-objs-$(CONFIG_EFI_MIXED) += $(obj)/efi_mixed.o
 vmlinux-objs-$(CONFIG_EFI_STUB) += 
$(objtree)/drivers/firmware/efi/libstub/lib.a
 
-vmlinux-objs-$(CONFIG_SECURE_LAUNCH) += $(obj)/early_sha1.o 
$(obj)/early_sha256.o
+vmlinux-objs-$(CONFIG_SECURE_LAUNCH) += $(obj)/early_sha1.o 
$(obj)/early_sha256.o \
+   $(obj)/sl_main.o $(obj)/sl_stub.o
 
 $(obj)/vmlinux: $(vmlinux-objs-y) FORCE
$(call if_changed,ld)
diff --git a/arch/x86/boot/compressed/head_64.S 
b/arch/x86/boot/compressed/head_64.S
index 1dcb794c5479..803c9e2e6d85 100644
--- a/arch/x86/boot/compressed/head_64.S
+++ b/arch/x86/boot/compressed/head_64.S
@@ -420,6 +420,13 @@ SYM_CODE_START(startup_64)
pushq   $0
popfq
 
+#ifdef CONFIG_SECURE_LAUNCH
+   /* Ensure the relocation region is coverd by a PMR */
+   movq%rbx, %rdi
+   movl$(_bss - startup_32), %esi
+   callq   sl_check_region
+#endif
+
 /*
  * Copy the compressed kernel to the end of our buffer
  * where decompression in place becomes safe.
@@ -462,6 +469,29 @@ SYM_FUNC_START_LOCAL_NOALIGN(.Lrelocated)
shrq$3, %rcx
rep stosq
 
+#ifdef CONFIG_SECURE_LAUNCH
+   /*
+* Have to do the final early sl stub work in 64b area.
+*
+* *** NOTE ***
+*
+* Several boot params get used before we get a chance to measure
+* them in this call. This is a known issue and we currently don't
+* have a solution. The scratch field doesn't matter. There is no
+* obvious way to do anything about the use of kernel_alignment or
+* init_size though these seem low

[PATCH v9 06/19] x86: Add early SHA-1 support for Secure Launch early measurements

2024-05-30 Thread Ross Philipson
From: "Daniel P. Smith" 

For better or worse, Secure Launch needs SHA-1 and SHA-256. The
choice of hashes used lie with the platform firmware, not with
software, and is often outside of the users control.

Even if we'd prefer to use SHA-256-only, if firmware elected to start us
with the SHA-1 and SHA-256 backs active, we still need SHA-1 to parse
the TPM event log thus far, and deliberately cap the SHA-1 PCRs in order
to safely use SHA-256 for everything else.

The SHA-1 code here has its origins in the code from the main kernel:

commit c4d5b9ffa31f ("crypto: sha1 - implement base layer for SHA-1")

A modified version of this code was introduced to the lib/crypto/sha1.c
to bring it in line with the SHA-256 code and allow it to be pulled into the
setup kernel in the same manner as SHA-256 is.

Signed-off-by: Daniel P. Smith 
Signed-off-by: Ross Philipson 
---
 arch/x86/boot/compressed/Makefile |  2 +
 arch/x86/boot/compressed/early_sha1.c | 12 
 include/crypto/sha1.h |  1 +
 lib/crypto/sha1.c | 81 +++
 4 files changed, 96 insertions(+)
 create mode 100644 arch/x86/boot/compressed/early_sha1.c

diff --git a/arch/x86/boot/compressed/Makefile 
b/arch/x86/boot/compressed/Makefile
index e9522c6893be..3307ebef4e1b 100644
--- a/arch/x86/boot/compressed/Makefile
+++ b/arch/x86/boot/compressed/Makefile
@@ -118,6 +118,8 @@ vmlinux-objs-$(CONFIG_EFI) += $(obj)/efi.o
 vmlinux-objs-$(CONFIG_EFI_MIXED) += $(obj)/efi_mixed.o
 vmlinux-objs-$(CONFIG_EFI_STUB) += 
$(objtree)/drivers/firmware/efi/libstub/lib.a
 
+vmlinux-objs-$(CONFIG_SECURE_LAUNCH) += $(obj)/early_sha1.o
+
 $(obj)/vmlinux: $(vmlinux-objs-y) FORCE
$(call if_changed,ld)
 
diff --git a/arch/x86/boot/compressed/early_sha1.c 
b/arch/x86/boot/compressed/early_sha1.c
new file mode 100644
index ..8a9b904a73ab
--- /dev/null
+++ b/arch/x86/boot/compressed/early_sha1.c
@@ -0,0 +1,12 @@
+// SPDX-License-Identifier: GPL-2.0
+/*
+ * Copyright (c) 2024 Apertus Solutions, LLC.
+ */
+
+#include 
+#include 
+#include 
+#include 
+#include 
+
+#include "../../../../lib/crypto/sha1.c"
diff --git a/include/crypto/sha1.h b/include/crypto/sha1.h
index 044ecea60ac8..d715dd5332e1 100644
--- a/include/crypto/sha1.h
+++ b/include/crypto/sha1.h
@@ -42,5 +42,6 @@ extern int crypto_sha1_finup(struct shash_desc *desc, const 
u8 *data,
 #define SHA1_WORKSPACE_WORDS   16
 void sha1_init(__u32 *buf);
 void sha1_transform(__u32 *digest, const char *data, __u32 *W);
+void sha1(const u8 *data, unsigned int len, u8 *out);
 
 #endif /* _CRYPTO_SHA1_H */
diff --git a/lib/crypto/sha1.c b/lib/crypto/sha1.c
index 1aebe7be9401..10152125b338 100644
--- a/lib/crypto/sha1.c
+++ b/lib/crypto/sha1.c
@@ -137,4 +137,85 @@ void sha1_init(__u32 *buf)
 }
 EXPORT_SYMBOL(sha1_init);
 
+static void __sha1_transform(u32 *digest, const char *data)
+{
+   u32 ws[SHA1_WORKSPACE_WORDS];
+
+   sha1_transform(digest, data, ws);
+
+   memzero_explicit(ws, sizeof(ws));
+}
+
+static void sha1_update(struct sha1_state *sctx, const u8 *data, unsigned int 
len)
+{
+   unsigned int partial = sctx->count % SHA1_BLOCK_SIZE;
+
+   sctx->count += len;
+
+   if (likely((partial + len) >= SHA1_BLOCK_SIZE)) {
+   int blocks;
+
+   if (partial) {
+   int p = SHA1_BLOCK_SIZE - partial;
+
+   memcpy(sctx->buffer + partial, data, p);
+   data += p;
+   len -= p;
+
+   __sha1_transform(sctx->state, sctx->buffer);
+   }
+
+   blocks = len / SHA1_BLOCK_SIZE;
+   len %= SHA1_BLOCK_SIZE;
+
+   if (blocks) {
+   while (blocks--) {
+   __sha1_transform(sctx->state, data);
+   data += SHA1_BLOCK_SIZE;
+   }
+   }
+   partial = 0;
+   }
+
+   if (len)
+   memcpy(sctx->buffer + partial, data, len);
+}
+
+static void sha1_final(struct sha1_state *sctx, u8 *out)
+{
+   const int bit_offset = SHA1_BLOCK_SIZE - sizeof(__be64);
+   unsigned int partial = sctx->count % SHA1_BLOCK_SIZE;
+   __be64 *bits = (__be64 *)(sctx->buffer + bit_offset);
+   __be32 *digest = (__be32 *)out;
+   int i;
+
+   sctx->buffer[partial++] = 0x80;
+   if (partial > bit_offset) {
+   memset(sctx->buffer + partial, 0x0, SHA1_BLOCK_SIZE - partial);
+   partial = 0;
+
+   __sha1_transform(sctx->state, sctx->buffer);
+   }
+
+   memset(sctx->buffer + partial, 0x0, bit_offset - partial);
+   *bits = cpu_to_be64(sctx->count << 3);
+   __sha1_transform(sctx->state, sctx->buffer);
+
+   for (i = 0; i < SHA1_DIGEST_SIZE / sizeof(__be32); i++)
+   put_unaligned_be32(sctx-&g

[PATCH v9 01/19] x86/boot: Place kernel_info at a fixed offset

2024-05-30 Thread Ross Philipson
From: Arvind Sankar 

There are use cases for storing the offset of a symbol in kernel_info.
For example, the trenchboot series [0] needs to store the offset of the
Measured Launch Environment header in kernel_info.

Since commit (note: commit ID from tip/master)

commit 527afc212231 ("x86/boot: Check that there are no run-time relocations")

run-time relocations are not allowed in the compressed kernel, so simply
using the symbol in kernel_info, as

.long   symbol

will cause a linker error because this is not position-independent.

With kernel_info being a separate object file and in a different section
from startup_32, there is no way to calculate the offset of a symbol
from the start of the image in a position-independent way.

To enable such use cases, put kernel_info into its own section which is
placed at a predetermined offset (KERNEL_INFO_OFFSET) via the linker
script. This will allow calculating the symbol offset in a
position-independent way, by adding the offset from the start of
kernel_info to KERNEL_INFO_OFFSET.

Ensure that kernel_info is aligned, and use the SYM_DATA.* macros
instead of bare labels. This stores the size of the kernel_info
structure in the ELF symbol table.

Signed-off-by: Arvind Sankar 
Cc: Ross Philipson 
Signed-off-by: Ross Philipson 
---
 arch/x86/boot/compressed/kernel_info.S | 19 +++
 arch/x86/boot/compressed/kernel_info.h | 12 
 arch/x86/boot/compressed/vmlinux.lds.S |  6 ++
 3 files changed, 33 insertions(+), 4 deletions(-)
 create mode 100644 arch/x86/boot/compressed/kernel_info.h

diff --git a/arch/x86/boot/compressed/kernel_info.S 
b/arch/x86/boot/compressed/kernel_info.S
index f818ee8fba38..c18f07181dd5 100644
--- a/arch/x86/boot/compressed/kernel_info.S
+++ b/arch/x86/boot/compressed/kernel_info.S
@@ -1,12 +1,23 @@
 /* SPDX-License-Identifier: GPL-2.0 */
 
+#include 
 #include 
+#include "kernel_info.h"
 
-   .section ".rodata.kernel_info", "a"
+/*
+ * If a field needs to hold the offset of a symbol from the start
+ * of the image, use the macro below, eg
+ * .long   rva(symbol)
+ * This will avoid creating run-time relocations, which are not
+ * allowed in the compressed kernel.
+ */
+
+#define rva(X) (((X) - kernel_info) + KERNEL_INFO_OFFSET)
 
-   .global kernel_info
+   .section ".rodata.kernel_info", "a"
 
-kernel_info:
+   .balign 16
+SYM_DATA_START(kernel_info)
/* Header, Linux top (structure). */
.ascii  "LToP"
/* Size. */
@@ -19,4 +30,4 @@ kernel_info:
 
 kernel_info_var_len_data:
/* Empty for time being... */
-kernel_info_end:
+SYM_DATA_END_LABEL(kernel_info, SYM_L_LOCAL, kernel_info_end)
diff --git a/arch/x86/boot/compressed/kernel_info.h 
b/arch/x86/boot/compressed/kernel_info.h
new file mode 100644
index ..c127f84aec63
--- /dev/null
+++ b/arch/x86/boot/compressed/kernel_info.h
@@ -0,0 +1,12 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+
+#ifndef BOOT_COMPRESSED_KERNEL_INFO_H
+#define BOOT_COMPRESSED_KERNEL_INFO_H
+
+#ifdef CONFIG_X86_64
+#define KERNEL_INFO_OFFSET 0x500
+#else /* 32-bit */
+#define KERNEL_INFO_OFFSET 0x100
+#endif
+
+#endif /* BOOT_COMPRESSED_KERNEL_INFO_H */
diff --git a/arch/x86/boot/compressed/vmlinux.lds.S 
b/arch/x86/boot/compressed/vmlinux.lds.S
index 083ec6d7722a..718c52f3f1e6 100644
--- a/arch/x86/boot/compressed/vmlinux.lds.S
+++ b/arch/x86/boot/compressed/vmlinux.lds.S
@@ -7,6 +7,7 @@ OUTPUT_FORMAT(CONFIG_OUTPUT_FORMAT)
 
 #include 
 #include 
+#include "kernel_info.h"
 
 #ifdef CONFIG_X86_64
 OUTPUT_ARCH(i386:x86-64)
@@ -27,6 +28,11 @@ SECTIONS
HEAD_TEXT
_ehead = . ;
}
+   .rodata.kernel_info KERNEL_INFO_OFFSET : {
+   *(.rodata.kernel_info)
+   }
+   ASSERT(ABSOLUTE(kernel_info) == KERNEL_INFO_OFFSET, "kernel_info at bad 
address!")
+
.rodata..compressed : {
*(.rodata..compressed)
}
-- 
2.39.3




[PATCH v9 00/19] x86: Trenchboot secure dynamic launch Linux kernel support

2024-05-30 Thread Ross Philipson
The larger focus of the TrenchBoot project (https://github.com/TrenchBoot) is to
enhance the boot security and integrity in a unified manner. The first area of
focus has been on the Trusted Computing Group's Dynamic Launch for establishing
a hardware Root of Trust for Measurement, also know as DRTM (Dynamic Root of
Trust for Measurement). The project has been and continues to work on providing
a unified means to Dynamic Launch that is a cross-platform (Intel and AMD) and
cross-architecture (x86 and Arm), with our recent involvment in the upcoming
Arm DRTM specification. The order of introducing DRTM to the Linux kernel
follows the maturity of DRTM in the architectures. Intel's Trusted eXecution
Technology (TXT) is present today and only requires a preamble loader, e.g. a
boot loader, and an OS kernel that is TXT-aware. AMD DRTM implementation has
been present since the introduction of AMD-V but requires an additional
component that is AMD specific and referred to in the specification as the
Secure Loader, which the TrenchBoot project has an active prototype in
development. Finally Arm's implementation is in specification development stage
and the project is looking to support it when it becomes available.

This patchset provides detailed documentation of DRTM, the approach used for
adding the capbility, and relevant API/ABI documentation. In addition to the
documentation the patch set introduces Intel TXT support as the first platform
for Linux Secure Launch.

A quick note on terminology. The larger open source project itself is called
TrenchBoot, which is hosted on Github (links below). The kernel feature enabling
the use of Dynamic Launch technology is referred to as "Secure Launch" within
the kernel code. As such the prefixes sl_/SL_ or slaunch/SLAUNCH will be seen
in the code. The stub code discussed above is referred to as the SL stub.

The Secure Launch feature starts with patch #2. Patch #1 was authored by Arvind
Sankar. There is no further status on this patch at this point but
Secure Launch depends on it so it is included with the set.

Links:

The TrenchBoot project including documentation:

https://trenchboot.org

The TrenchBoot project on Github:

https://github.com/trenchboot

Intel TXT is documented in its own specification and in the SDM Instruction Set 
volume:

https://www.intel.com/content/dam/www/public/us/en/documents/guides/intel-txt-software-development-guide.pdf
https://software.intel.com/en-us/articles/intel-sdm

AMD SKINIT is documented in the System Programming manual:

https://www.amd.com/system/files/TechDocs/24593.pdf

The TrenchBoot project provides a quick start guide to help get a system
up and running with Secure Launch for Linux:

https://github.com/TrenchBoot/documentation/blob/master/QUICKSTART.md

Patch set based on commit:

torvalds/master/ea5f6ad9ad9645733b72ab53a98e719b460d36a6

Thanks
Ross Philipson and Daniel P. Smith

Changes in v2:

 - Modified 32b entry code to prevent causing relocations in the compressed
   kernel.
 - Dropped patches for compressed kernel TPM PCR extender.
 - Modified event log code to insert log delimiter events and not rely
   on TPM access.
 - Stop extending PCRs in the early Secure Launch stub code.
 - Removed Kconfig options for hash algorithms and use the algorithms the
   ACM used.
 - Match Secure Launch measurement algorithm use to those reported in the
   TPM 2.0 event log.
 - Read the TPM events out of the TPM and extend them into the PCRs using
   the mainline TPM driver. This is done in the late initcall module.
 - Allow use of alternate PCR 19 and 20 for post ACM measurements.
 - Add Kconfig constraints needed by Secure Launch (disable KASLR
   and add x2apic dependency).
 - Fix testing of SL_FLAGS when determining if Secure Launch is active
   and the architecture is TXT.
 - Use SYM_DATA_START_LOCAL macros in early entry point code.
 - Security audit changes:
   - Validate buffers passed to MLE do not overlap the MLE and are
 properly laid out.
   - Validate buffers and memory regions used by the MLE are
 protected by IOMMU PMRs.
 - Force IOMMU to not use passthrough mode during a Secure Launch.
 - Prevent KASLR use during a Secure Launch.

Changes in v3:

 - Introduce x86 documentation patch to provide background, overview
   and configuration/ABI information for the Secure Launch kernel
   feature.
 - Remove the IOMMU patch with special cases for disabling IOMMU
   passthrough. Configuring the IOMMU is now a documentation matter
   in the previously mentioned new patch.
 - Remove special case KASLR disabling code. Configuring KASLR is now
   a documentation matter in the previously mentioned new patch.
 - Fix incorrect panic on TXT public register read.
 - Properly handle and measure setup_indirect bootparams in the early
   launch code.
 - Use correct compressed kernel image base address when testing buffers
   in the early launch stub code. This bug was introduced by the changes
   t

[PATCH v9 04/19] x86: Secure Launch Resource Table header file

2024-05-30 Thread Ross Philipson
Introduce the Secure Launch Resource Table which forms the formal
interface between the pre and post launch code.

Signed-off-by: Ross Philipson 
---
 include/linux/slr_table.h | 271 ++
 1 file changed, 271 insertions(+)
 create mode 100644 include/linux/slr_table.h

diff --git a/include/linux/slr_table.h b/include/linux/slr_table.h
new file mode 100644
index ..213d8ac16f0f
--- /dev/null
+++ b/include/linux/slr_table.h
@@ -0,0 +1,271 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+/*
+ * Secure Launch Resource Table
+ *
+ * Copyright (c) 2024, Oracle and/or its affiliates.
+ */
+
+#ifndef _LINUX_SLR_TABLE_H
+#define _LINUX_SLR_TABLE_H
+
+/* Put this in efi.h if it becomes a standard */
+#define SLR_TABLE_GUID EFI_GUID(0x877a9b2a, 0x0385, 
0x45d1, 0xa0, 0x34, 0x9d, 0xac, 0x9c, 0x9e, 0x56, 0x5f)
+
+/* SLR table header values */
+#define SLR_TABLE_MAGIC0x4452544d
+#define SLR_TABLE_REVISION 1
+
+/* Current revisions for the policy and UEFI config */
+#define SLR_POLICY_REVISION1
+#define SLR_UEFI_CONFIG_REVISION   1
+
+/* SLR defined architectures */
+#define SLR_INTEL_TXT  1
+#define SLR_AMD_SKINIT 2
+
+/* SLR defined bootloaders */
+#define SLR_BOOTLOADER_INVALID 0
+#define SLR_BOOTLOADER_GRUB1
+
+/* Log formats */
+#define SLR_DRTM_TPM12_LOG 1
+#define SLR_DRTM_TPM20_LOG 2
+
+/* DRTM Policy Entry Flags */
+#define SLR_POLICY_FLAG_MEASURED   0x1
+#define SLR_POLICY_IMPLICIT_SIZE   0x2
+
+/* Array Lengths */
+#define TPM_EVENT_INFO_LENGTH  32
+#define TXT_VARIABLE_MTRRS_LENGTH  32
+
+/* Tags */
+#define SLR_ENTRY_INVALID  0x
+#define SLR_ENTRY_DL_INFO  0x0001
+#define SLR_ENTRY_LOG_INFO 0x0002
+#define SLR_ENTRY_ENTRY_POLICY 0x0003
+#define SLR_ENTRY_INTEL_INFO   0x0004
+#define SLR_ENTRY_AMD_INFO 0x0005
+#define SLR_ENTRY_ARM_INFO 0x0006
+#define SLR_ENTRY_UEFI_INFO0x0007
+#define SLR_ENTRY_UEFI_CONFIG  0x0008
+#define SLR_ENTRY_END  0x
+
+/* Entity Types */
+#define SLR_ET_UNSPECIFIED 0x
+#define SLR_ET_SLRT0x0001
+#define SLR_ET_BOOT_PARAMS 0x0002
+#define SLR_ET_SETUP_DATA  0x0003
+#define SLR_ET_CMDLINE 0x0004
+#define SLR_ET_UEFI_MEMMAP 0x0005
+#define SLR_ET_RAMDISK 0x0006
+#define SLR_ET_TXT_OS2MLE  0x0010
+#define SLR_ET_UNUSED  0x
+
+#ifndef __ASSEMBLY__
+
+/*
+ * Primary Secure Launch Resource Table Header
+ */
+struct slr_table {
+   u32 magic;
+   u16 revision;
+   u16 architecture;
+   u32 size;
+   u32 max_size;
+   /* table entries */
+} __packed;
+
+/*
+ * Common SLRT Table Header
+ */
+struct slr_entry_hdr {
+   u16 tag;
+   u16 size;
+} __packed;
+
+/*
+ * Boot loader context
+ */
+struct slr_bl_context {
+   u16 bootloader;
+   u16 reserved[3];
+   u64 context;
+} __packed;
+
+/*
+ * Dynamic Launch Callback Function type
+ */
+typedef void (*dl_handler_func)(struct slr_bl_context *bl_context);
+
+/*
+ * DRTM Dynamic Launch Configuration
+ */
+struct slr_entry_dl_info {
+   struct slr_entry_hdr hdr;
+   u32 dce_size;
+   u64 dce_base;
+   u64 dlme_size;
+   u64 dlme_base;
+   u64 dlme_entry;
+   struct slr_bl_context bl_context;
+   u64 dl_handler;
+} __packed;
+
+/*
+ * TPM Log Information
+ */
+struct slr_entry_log_info {
+   struct slr_entry_hdr hdr;
+   u16 format;
+   u16 reserved[3];
+   u32 size;
+   u64 addr;
+} __packed;
+
+/*
+ * DRTM Measurement Entry
+ */
+struct slr_policy_entry {
+   u16 pcr;
+   u16 entity_type;
+   u16 flags;
+   u16 reserved;
+   u64 size;
+   u64 entity;
+   char evt_info[TPM_EVENT_INFO_LENGTH];
+} __packed;
+
+/*
+ * DRTM Measurement Policy
+ */
+struct slr_entry_policy {
+   struct slr_entry_hdr hdr;
+   u16 revision;
+   u16 nr_entries;
+   struct slr_policy_entry policy_entries[];
+} __packed;
+
+/*
+ * Secure Launch defined MTRR saving structures
+ */
+struct slr_txt_mtrr_pair {
+   u64 mtrr_physbase;
+   u64 mtrr_physmask;
+} __packed;
+
+struct slr_txt_mtrr_state {
+   u64 default_mem_type;
+   u64 mtrr_vcnt;
+   struct slr_txt_mtrr_pair mtrr_pair[TXT_VARIABLE_MTRRS_LENGTH];
+} __packed;
+
+/*
+ * Intel TXT Info table
+ */
+struct slr_entry_intel_info {
+   struct slr_entry_hdr hdr;
+   u16 reserved[2];
+   u64 txt_heap;
+   u64 saved_misc_enable_msr;
+   struct slr_txt_mtrr_state saved_bsp_mtrrs;
+} __packed;
+
+/*
+ * UEFI config measurement entry
+ */
+struct slr_uefi_cfg_entry {
+   u16 pcr;
+   u16 reserved;
+   u32 size;
+   u64 cfg; /* address or value */
+   char evt_info[TPM_EVENT_INFO_LENGTH];
+} __packed;
+
+/*
+ * UEFI config measurements
+ */
+struct slr_entry_uefi_config {
+   struct slr_entry_hdr hdr;
+   u16 revision;
+   u16 nr_entries;
+   struct slr_uefi_cfg_entry

[PATCH v9 03/19] x86: Secure Launch Kconfig

2024-05-30 Thread Ross Philipson
Initial bits to bring in Secure Launch functionality. Add Kconfig
options for compiling in/out the Secure Launch code.

Signed-off-by: Ross Philipson 
---
 arch/x86/Kconfig | 11 +++
 1 file changed, 11 insertions(+)

diff --git a/arch/x86/Kconfig b/arch/x86/Kconfig
index bc47bc9841ff..ee8e0cbc9a3e 100644
--- a/arch/x86/Kconfig
+++ b/arch/x86/Kconfig
@@ -2067,6 +2067,17 @@ config EFI_RUNTIME_MAP
 
  See also Documentation/ABI/testing/sysfs-firmware-efi-runtime-map.
 
+config SECURE_LAUNCH
+   bool "Secure Launch support"
+   depends on X86_64 && X86_X2APIC && TCG_TPM && CRYPTO_LIB_SHA1 && 
CRYPTO_LIB_SHA256
+   help
+  The Secure Launch feature allows a kernel to be loaded
+  directly through an Intel TXT measured launch. Intel TXT
+  establishes a Dynamic Root of Trust for Measurement (DRTM)
+  where the CPU measures the kernel image. This feature then
+  continues the measurement chain over kernel configuration
+  information and init images.
+
 source "kernel/Kconfig.hz"
 
 config ARCH_SUPPORTS_KEXEC
-- 
2.39.3




[PATCH v9 19/19] x86: EFI stub DRTM launch support for Secure Launch

2024-05-30 Thread Ross Philipson
This support allows the DRTM launch to be initiated after an EFI stub
launch of the Linux kernel is done. This is accomplished by providing
a handler to jump to when a Secure Launch is in progress. This has to be
called after the EFI stub does Exit Boot Services.

Signed-off-by: Ross Philipson 
---
 drivers/firmware/efi/libstub/x86-stub.c | 98 +
 1 file changed, 98 insertions(+)

diff --git a/drivers/firmware/efi/libstub/x86-stub.c 
b/drivers/firmware/efi/libstub/x86-stub.c
index d5a8182cf2e1..a1143d006202 100644
--- a/drivers/firmware/efi/libstub/x86-stub.c
+++ b/drivers/firmware/efi/libstub/x86-stub.c
@@ -9,6 +9,8 @@
 #include 
 #include 
 #include 
+#include 
+#include 
 
 #include 
 #include 
@@ -830,6 +832,97 @@ static efi_status_t efi_decompress_kernel(unsigned long 
*kernel_entry)
return efi_adjust_memory_range_protection(addr, kernel_text_size);
 }
 
+#if (IS_ENABLED(CONFIG_SECURE_LAUNCH))
+static bool efi_secure_launch_update_boot_params(struct slr_table *slrt,
+struct boot_params 
*boot_params)
+{
+   struct slr_entry_intel_info *txt_info;
+   struct slr_entry_policy *policy;
+   struct txt_os_mle_data *os_mle;
+   bool updated = false;
+   int i;
+
+   txt_info = slr_next_entry_by_tag(slrt, NULL, SLR_ENTRY_INTEL_INFO);
+   if (!txt_info)
+   return false;
+
+   os_mle = txt_os_mle_data_start((void *)txt_info->txt_heap);
+   if (!os_mle)
+   return false;
+
+   os_mle->boot_params_addr = (u32)(u64)boot_params;
+
+   policy = slr_next_entry_by_tag(slrt, NULL, SLR_ENTRY_ENTRY_POLICY);
+   if (!policy)
+   return false;
+
+   for (i = 0; i < policy->nr_entries; i++) {
+   if (policy->policy_entries[i].entity_type == 
SLR_ET_BOOT_PARAMS) {
+   policy->policy_entries[i].entity = (u64)boot_params;
+   updated = true;
+   break;
+   }
+   }
+
+   /*
+* If this is a PE entry into EFI stub the mocked up boot params will
+* be missing some of the setup header data needed for the second stage
+* of the Secure Launch boot.
+*/
+   if (image) {
+   struct setup_header *hdr = (struct setup_header *)((u8 
*)image->image_base + 0x1f1);
+   u64 cmdline_ptr, hi_val;
+
+   boot_params->hdr.setup_sects = hdr->setup_sects;
+   boot_params->hdr.syssize = hdr->syssize;
+   boot_params->hdr.version = hdr->version;
+   boot_params->hdr.loadflags = hdr->loadflags;
+   boot_params->hdr.kernel_alignment = hdr->kernel_alignment;
+   boot_params->hdr.min_alignment = hdr->min_alignment;
+   boot_params->hdr.xloadflags = hdr->xloadflags;
+   boot_params->hdr.init_size = hdr->init_size;
+   boot_params->hdr.kernel_info_offset = hdr->kernel_info_offset;
+   hi_val = boot_params->ext_cmd_line_ptr;
+   cmdline_ptr = boot_params->hdr.cmd_line_ptr | hi_val << 32;
+   boot_params->hdr.cmdline_size = strlen((const char 
*)cmdline_ptr);;
+   }
+
+   return updated;
+}
+
+static void efi_secure_launch(struct boot_params *boot_params)
+{
+   struct slr_entry_dl_info *dlinfo;
+   efi_guid_t guid = SLR_TABLE_GUID;
+   dl_handler_func handler_callback;
+   struct slr_table *slrt;
+
+   /*
+* The presence of this table indicated a Secure Launch
+* is being requested.
+*/
+   slrt = (struct slr_table *)get_efi_config_table(guid);
+   if (!slrt || slrt->magic != SLR_TABLE_MAGIC)
+   return;
+
+   /*
+* Since the EFI stub library creates its own boot_params on entry, the
+* SLRT and TXT heap have to be updated with this version.
+*/
+   if (!efi_secure_launch_update_boot_params(slrt, boot_params))
+   return;
+
+   /* Jump through DL stub to initiate Secure Launch */
+   dlinfo = slr_next_entry_by_tag(slrt, NULL, SLR_ENTRY_DL_INFO);
+
+   handler_callback = (dl_handler_func)dlinfo->dl_handler;
+
+   handler_callback(&dlinfo->bl_context);
+
+   unreachable();
+}
+#endif
+
 static void __noreturn enter_kernel(unsigned long kernel_addr,
struct boot_params *boot_params)
 {
@@ -957,6 +1050,11 @@ void __noreturn efi_stub_entry(efi_handle_t handle,
goto fail;
}
 
+#if (IS_ENABLED(CONFIG_SECURE_LAUNCH))
+   /* If a Secure Launch is in progress, this never returns */
+   efi_secure_launch(boot_params);
+#endif
+
/*
 * Call the SEV init code while still running with the firmware's
 * GDT/IDT, so #VC exceptions will be handled by EFI.
-- 
2.39.3




Re: [PATCH v7 02/13] Documentation/x86: Secure Launch kernel documentation

2023-11-16 Thread ross . philipson

On 11/12/23 10:07 AM, Alyssa Ross wrote:

+Load-time Integrity
+---
+
+It is critical to understand what load-time integrity establishes about a
+system and what is assumed, i.e. what is being trusted. Load-time integrity is
+when a trusted entity, i.e. an entity with an assumed integrity, takes an
+action to assess an entity being loaded into memory before it is used. A
+variety of mechanisms may be used to conduct the assessment, each with
+different properties. A particular property is whether the mechanism creates an
+evidence of the assessment. Often either cryptographic signature checking or
+hashing are the common assessment operations used.
+
+A signature checking assessment functions by requiring a representation of the
+accepted authorities and uses those representations to assess if the entity has
+been signed by an accepted authority. The benefit to this process is that
+assessment process includes an adjudication of the assessment. The drawbacks
+are that 1) the adjudication is susceptible to tampering by the Trusted
+Computing Base (TCB), 2) there is no evidence to assert that an untampered
+adjudication was completed, and 3) the system must be an active participant in
+the key management infrastructure.
+
+A cryptographic hashing assessment does not adjudicate the assessment but
+instead, generates evidence of the assessment to be adjudicated independently.
+The benefits to this approach is that the assessment may be simple such that it
+may be implemented in an immutable mechanism, e.g. in hardware.  Additionally,
+it is possible for the adjudication to be conducted where it cannot be tampered
+with by the TCB. The drawback is that a compromised environment will be allowed
+to execute until an adjudication can be completed.
+
+Ultimately, load-time integrity provides confidence that the correct entity was
+loaded and in the absence of a run-time integrity mechanism assumes, i.e.
+trusts, that the entity will never become corrupted.


I'm somewhat familiar with this area, but not massively (so probably the
sort of person this documentation is aimed at!), and this was the only
section of the documentation I had trouble understanding.

The thing that confused me was that the first time I read this, I was
thinking that a hashing assessment would be comparing the generated hash
to a baked-in known good hash, simliar to how e.g. a verity root hash
might be specified on the kernel command line, baked in to the OS image.
This made me wonder why it wasn't considered to be adjudicated during
assessment.  Upon reading it a second time, I now understand that what
it's actually talking about is generating a hash, but not comparing it
automatically against anything, and making it available for external
adjudication somehow.


Yes there is nothing baked into an image in the way we currently use is. 
I take what you call a hashing assessment to be what we would call 
remote attestation where an independent agent assesses the state of the 
measured launch. This is indeed one of the primary use cases. There is 
another use case closer to the baked in one where secrets on the system 
are sealed to the TPM using a known good PCR configuration. Only by 
launching and attaining that known good state can the secrets be unsealed.




I don't know if the approach I first thought of is used in early boot
at all, but it might be worth contrasting the cryptographic hashing
assessment described here with it, because I imagine that I'm not going
to be the only reader who's more used to thinking about integrity
slightly later in the boot process where adjudicating based on a static
hash is common, and who's mind is going to go to that when they read
about a "cryptographic hashing assessment".

The rest of the documentation was easy to understand and very helpful to
understanding system launch integrity.  Thanks!


I am glad it was helpful. We will revisit the section that caused 
confusion and see if we can make it clearer.


Thank you,
Ross



Re: [PATCH v7 10/13] kexec: Secure Launch kexec SEXIT support

2023-11-15 Thread ross . philipson

On 11/10/23 3:41 PM, Sean Christopherson wrote:

On Fri, Nov 10, 2023, Ross Philipson wrote:

Prior to running the next kernel via kexec, the Secure Launch code
closes down private SMX resources and does an SEXIT. This allows the
next kernel to start normally without any issues starting the APs etc.

Signed-off-by: Ross Philipson 
---
  arch/x86/kernel/slaunch.c | 73 +++
  kernel/kexec_core.c   |  4 +++
  2 files changed, 77 insertions(+)

diff --git a/arch/x86/kernel/slaunch.c b/arch/x86/kernel/slaunch.c
index cd5aa34e395c..32b0c24a6484 100644
--- a/arch/x86/kernel/slaunch.c
+++ b/arch/x86/kernel/slaunch.c
@@ -523,3 +523,76 @@ void __init slaunch_setup_txt(void)
  
  	pr_info("Intel TXT setup complete\n");

  }
+
+static inline void smx_getsec_sexit(void)
+{
+   asm volatile (".byte 0x0f,0x37\n"
+ : : "a" (SMX_X86_GETSEC_SEXIT));


SMX has been around for what, two decades?  Is open coding getsec actually 
necessary?


There were some older gcc compilers that still did not like the getsec 
mnemonic. Perhaps they are old enough now where they don't matter any 
longer. I will check on that...





+   /* Disable SMX mode */


Heh, the code and the comment don't really agree.  I'm guessing the intent of 
the
comment is referring to leaving the measured environment, but it looks odd.   If
manually setting SMXE is necessary, I'd just delete this comment, or maybe move
it to above SEXIT.


I will look it over and see what makes sense.




+   cr4_set_bits(X86_CR4_SMXE);


Is it actually legal to clear CR4.SMXE while post-SENTER?  I don't see anything
in the SDM that says it's illegal, but allowing software to clear SMXE in that
case seems all kinds of odd.


I am pretty sure I coded this up using the pseudo code in the TXT dev 
guide and some guidance from Intel/former Intel folks. I will revisit it 
to make sure it is correct.


Thanks
Ross




+
+   /* Do the SEXIT SMX operation */
+   smx_getsec_sexit();
+
+   pr_info("TXT SEXIT complete.\n");
+}





[PATCH v7 02/13] Documentation/x86: Secure Launch kernel documentation

2023-11-10 Thread Ross Philipson
Introduce background, overview and configuration/ABI information
for the Secure Launch kernel feature.

Signed-off-by: Daniel P. Smith 
Signed-off-by: Ross Philipson 
Reviewed-by: Bagas Sanjaya 
---
 Documentation/security/index.rst  |   1 +
 .../security/launch-integrity/index.rst   |  11 +
 .../security/launch-integrity/principles.rst  | 320 ++
 .../secure_launch_details.rst | 584 ++
 .../secure_launch_overview.rst| 226 +++
 5 files changed, 1142 insertions(+)
 create mode 100644 Documentation/security/launch-integrity/index.rst
 create mode 100644 Documentation/security/launch-integrity/principles.rst
 create mode 100644 
Documentation/security/launch-integrity/secure_launch_details.rst
 create mode 100644 
Documentation/security/launch-integrity/secure_launch_overview.rst

diff --git a/Documentation/security/index.rst b/Documentation/security/index.rst
index 59f8fc106cb0..56e31fb3d91f 100644
--- a/Documentation/security/index.rst
+++ b/Documentation/security/index.rst
@@ -19,3 +19,4 @@ Security Documentation
digsig
landlock
secrets/index
+   launch-integrity/index
diff --git a/Documentation/security/launch-integrity/index.rst 
b/Documentation/security/launch-integrity/index.rst
new file mode 100644
index ..838328186dd2
--- /dev/null
+++ b/Documentation/security/launch-integrity/index.rst
@@ -0,0 +1,11 @@
+=
+System Launch Integrity documentation
+=
+
+.. toctree::
+   :maxdepth: 1
+
+   principles
+   secure_launch_overview
+   secure_launch_details
+
diff --git a/Documentation/security/launch-integrity/principles.rst 
b/Documentation/security/launch-integrity/principles.rst
new file mode 100644
index ..68a415aec545
--- /dev/null
+++ b/Documentation/security/launch-integrity/principles.rst
@@ -0,0 +1,320 @@
+.. SPDX-License-Identifier: GPL-2.0
+.. Copyright © 2019-2023 Daniel P. Smith 
+
+===
+System Launch Integrity
+===
+
+:Author: Daniel P. Smith
+:Date: October 2023
+
+This document serves to establish a common understanding of what is system
+launch, the integrity concern for system launch, and why using a Root of Trust
+(RoT) from a Dynamic Launch may be desired. Throughout this document
+terminology from the Trusted Computing Group (TCG) and National Institute for
+Science and Technology (NIST) is used to ensure a vendor natural language is
+used to describe and reference security-related concepts.
+
+System Launch
+=
+
+There is a tendency to only consider the classical power-on boot as the only
+means to launch an Operating System (OS) on a computer system, but in fact most
+modern processors support two methods to launch the system. To provide clarity
+a common definition of a system launch should be established. This definition
+is that a during a single power life cycle of a system, a System Launch
+consists of an initialization event, typically in hardware, that is followed by
+an executing software payload that takes the system from the initialized state
+to a running state. Driven by the Trusted Computing Group (TCG) architecture,
+modern processors are able to support two methods to launch a system, these two
+types of system launch are known as Static Launch and Dynamic Launch.
+
+Static Launch
+-
+
+Static launch is the system launch associated with the power cycle of the CPU.
+Thus, static launch refers to the classical power-on boot where the
+initialization event is the release of the CPU from reset and the system
+firmware is the software payload that brings the system up to a running state.
+Since static launch is the system launch associated with the beginning of the
+power lifecycle of a system, it is therefore a fixed, one-time system launch.
+It is because of this that static launch is referred to and thought of as being
+"static".
+
+Dynamic Launch
+--
+
+Modern CPUs architectures provides a mechanism to re-initialize the system to a
+"known good" state without requiring a power event. This re-initialization
+event is the event for a dynamic launch and is referred to as the Dynamic
+Launch Event (DLE). The DLE functions by accepting a software payload, referred
+to as the Dynamic Configuration Environment (DCE), that execution is handed to
+after the DLE is invoked. The DCE is responsible for bringing the system back
+to a running state. Since the dynamic launch is not tied to a power event like
+the static launch, this enables a dynamic launch to be initiated at any time
+and multiple times during a single power life cycle. This dynamism is the
+reasoning behind referring to this system launch as being dynamic.
+
+Because a dynamic launch can be conducted at any time during a single power
+life cycle, they are classified into one of two types, an early launch or a
+late launch.
+
+:Early Launch: W

[PATCH v7 07/13] x86: Secure Launch kernel early boot stub

2023-11-10 Thread Ross Philipson
The Secure Launch (SL) stub provides the entry point for Intel TXT (and
later AMD SKINIT) to vector to during the late launch. The symbol
sl_stub_entry is that entry point and its offset into the kernel is
conveyed to the launching code using the MLE (Measured Launch
Environment) header in the structure named mle_header. The offset of the
MLE header is set in the kernel_info. The routine sl_stub contains the
very early late launch setup code responsible for setting up the basic
environment to allow the normal kernel startup_32 code to proceed. It is
also responsible for properly waking and handling the APs on Intel
platforms. The routine sl_main which runs after entering 64b mode is
responsible for measuring configuration and module information before
it is used like the boot params, the kernel command line, the TXT heap,
an external initramfs, etc.

Signed-off-by: Ross Philipson 
---
 Documentation/arch/x86/boot.rst|  21 +
 arch/x86/boot/compressed/Makefile  |   3 +-
 arch/x86/boot/compressed/head_64.S |  34 ++
 arch/x86/boot/compressed/kernel_info.S |  34 ++
 arch/x86/boot/compressed/sl_main.c | 582 
 arch/x86/boot/compressed/sl_stub.S | 705 +
 arch/x86/include/asm/msr-index.h   |   5 +
 arch/x86/include/uapi/asm/bootparam.h  |   1 +
 arch/x86/kernel/asm-offsets.c  |  20 +
 9 files changed, 1404 insertions(+), 1 deletion(-)
 create mode 100644 arch/x86/boot/compressed/sl_main.c
 create mode 100644 arch/x86/boot/compressed/sl_stub.S

diff --git a/Documentation/arch/x86/boot.rst b/Documentation/arch/x86/boot.rst
index f5d2f2414de8..03a2c5302a89 100644
--- a/Documentation/arch/x86/boot.rst
+++ b/Documentation/arch/x86/boot.rst
@@ -482,6 +482,14 @@ Protocol:  2.00+
- If 1, KASLR enabled.
- If 0, KASLR disabled.
 
+  Bit 2 (kernel internal): SLAUNCH_FLAG
+
+   - Used internally by the compressed kernel to communicate
+ Secure Launch status to kernel proper.
+
+   - If 1, Secure Launch enabled.
+   - If 0, Secure Launch disabled.
+
   Bit 5 (write): QUIET_FLAG
 
- If 0, print early messages.
@@ -1027,6 +1035,19 @@ Offset/size: 0x000c/4
 
   This field contains maximal allowed type for setup_data and setup_indirect 
structs.
 
+   =
+Field name:mle_header_offset
+Offset/size:   0x0010/4
+   =
+
+  This field contains the offset to the Secure Launch Measured Launch 
Environment
+  (MLE) header. This offset is used to locate information needed during a 
secure
+  late launch using Intel TXT. If the offset is zero, the kernel does not have
+  Secure Launch capabilities. The MLE entry point is called from TXT on the BSP
+  following a success measured launch. The specific state of the processors is
+  outlined in the TXT Software Development Guide, the latest can be found here:
+  
https://www.intel.com/content/dam/www/public/us/en/documents/guides/intel-txt-software-development-guide.pdf
+
 
 The Image Checksum
 ==
diff --git a/arch/x86/boot/compressed/Makefile 
b/arch/x86/boot/compressed/Makefile
index 07a2f56cd571..3186d303ec8b 100644
--- a/arch/x86/boot/compressed/Makefile
+++ b/arch/x86/boot/compressed/Makefile
@@ -118,7 +118,8 @@ vmlinux-objs-$(CONFIG_EFI) += $(obj)/efi.o
 vmlinux-objs-$(CONFIG_EFI_MIXED) += $(obj)/efi_mixed.o
 vmlinux-objs-$(CONFIG_EFI_STUB) += 
$(objtree)/drivers/firmware/efi/libstub/lib.a
 
-vmlinux-objs-$(CONFIG_SECURE_LAUNCH) += $(obj)/early_sha1.o 
$(obj)/early_sha256.o
+vmlinux-objs-$(CONFIG_SECURE_LAUNCH) += $(obj)/early_sha1.o 
$(obj)/early_sha256.o \
+   $(obj)/sl_main.o $(obj)/sl_stub.o
 
 $(obj)/vmlinux: $(vmlinux-objs-y) FORCE
$(call if_changed,ld)
diff --git a/arch/x86/boot/compressed/head_64.S 
b/arch/x86/boot/compressed/head_64.S
index bf4a10a5794f..6fa5bb87195b 100644
--- a/arch/x86/boot/compressed/head_64.S
+++ b/arch/x86/boot/compressed/head_64.S
@@ -415,6 +415,17 @@ SYM_CODE_START(startup_64)
pushq   $0
popfq
 
+#ifdef CONFIG_SECURE_LAUNCH
+   pushq   %rsi
+
+   /* Ensure the relocation region coverd by a PMR */
+   movq%rbx, %rdi
+   movl$(_bss - startup_32), %esi
+   callq   sl_check_region
+
+   popq%rsi
+#endif
+
 /*
  * Copy the compressed kernel to the end of our buffer
  * where decompression in place becomes safe.
@@ -457,6 +468,29 @@ SYM_FUNC_START_LOCAL_NOALIGN(.Lrelocated)
shrq$3, %rcx
rep stosq
 
+#ifdef CONFIG_SECURE_LAUNCH
+   /*
+* Have to do the final early sl stub work in 64b area.
+*
+* *** NOTE ***
+*
+* Several boot params get used before we get a chance to measure
+* them in this call. This is a known issue and we currently don't
+* have a solution. The scratch field doesn't matter. There is no
+* obvious way to do anything about the use of kernel_al

[PATCH v7 08/13] x86: Secure Launch kernel late boot stub

2023-11-10 Thread Ross Philipson
The routine slaunch_setup is called out of the x86 specific setup_arch()
routine during early kernel boot. After determining what platform is
present, various operations specific to that platform occur. This
includes finalizing setting for the platform late launch and verifying
that memory protections are in place.

For TXT, this code also reserves the original compressed kernel setup
area where the APs were left looping so that this memory cannot be used.

Signed-off-by: Ross Philipson 
---
 arch/x86/kernel/Makefile   |   1 +
 arch/x86/kernel/setup.c|   3 +
 arch/x86/kernel/slaunch.c  | 525 +
 drivers/iommu/intel/dmar.c |   4 +
 4 files changed, 533 insertions(+)
 create mode 100644 arch/x86/kernel/slaunch.c

diff --git a/arch/x86/kernel/Makefile b/arch/x86/kernel/Makefile
index 325ab98f..5848ea310175 100644
--- a/arch/x86/kernel/Makefile
+++ b/arch/x86/kernel/Makefile
@@ -74,6 +74,7 @@ obj-$(CONFIG_X86_32)  += tls.o
 obj-$(CONFIG_IA32_EMULATION)   += tls.o
 obj-y  += step.o
 obj-$(CONFIG_INTEL_TXT)+= tboot.o
+obj-$(CONFIG_SECURE_LAUNCH)+= slaunch.o
 obj-$(CONFIG_ISA_DMA_API)  += i8237.o
 obj-y  += stacktrace.o
 obj-y  += cpu/
diff --git a/arch/x86/kernel/setup.c b/arch/x86/kernel/setup.c
index 1526747bedf2..0b885742c297 100644
--- a/arch/x86/kernel/setup.c
+++ b/arch/x86/kernel/setup.c
@@ -21,6 +21,7 @@
 #include 
 #include 
 #include 
+#include 
 #include 
 #include 
 #include 
@@ -937,6 +938,8 @@ void __init setup_arch(char **cmdline_p)
early_gart_iommu_check();
 #endif
 
+   slaunch_setup_txt();
+
/*
 * partially used pages are not usable - thus
 * we are rounding upwards:
diff --git a/arch/x86/kernel/slaunch.c b/arch/x86/kernel/slaunch.c
new file mode 100644
index ..cd5aa34e395c
--- /dev/null
+++ b/arch/x86/kernel/slaunch.c
@@ -0,0 +1,525 @@
+// SPDX-License-Identifier: GPL-2.0
+/*
+ * Secure Launch late validation/setup and finalization support.
+ *
+ * Copyright (c) 2022, Oracle and/or its affiliates.
+ */
+
+#define pr_fmt(fmt) KBUILD_MODNAME ": " fmt
+
+#include 
+#include 
+#include 
+#include 
+#include 
+#include 
+#include 
+#include 
+#include 
+#include 
+#include 
+#include 
+#include 
+#include 
+
+static u32 sl_flags __ro_after_init;
+static struct sl_ap_wake_info ap_wake_info __ro_after_init;
+static u64 evtlog_addr __ro_after_init;
+static u32 evtlog_size __ro_after_init;
+static u64 vtd_pmr_lo_size __ro_after_init;
+
+/* This should be plenty of room */
+static u8 txt_dmar[PAGE_SIZE] __aligned(16);
+
+/*
+ * Get the Secure Launch flags that indicate what kind of launch is being done.
+ * E.g. a TXT launch is in progress or no Secure Launch is happening.
+ */
+u32 slaunch_get_flags(void)
+{
+   return sl_flags;
+}
+
+/*
+ * Return the AP wakeup information used in the SMP boot code to start up
+ * the APs that are parked using MONITOR/MWAIT.
+ */
+struct sl_ap_wake_info *slaunch_get_ap_wake_info(void)
+{
+   return &ap_wake_info;
+}
+
+/*
+ * On Intel platforms, TXT passes a safe copy of the DMAR ACPI table to the
+ * DRTM. The DRTM is supposed to use this instead of the one found in the
+ * ACPI tables.
+ */
+struct acpi_table_header *slaunch_get_dmar_table(struct acpi_table_header 
*dmar)
+{
+   /* The DMAR is only stashed and provided via TXT on Intel systems */
+   if (memcmp(txt_dmar, "DMAR", 4))
+   return dmar;
+
+   return (struct acpi_table_header *)(txt_dmar);
+}
+
+/*
+ * If running within a TXT established DRTM, this is the proper way to reset
+ * the system if a failure occurs or a security issue is found.
+ */
+void __noreturn slaunch_txt_reset(void __iomem *txt,
+ const char *msg, u64 error)
+{
+   u64 one = 1, val;
+
+   pr_err("%s", msg);
+
+   /*
+* This performs a TXT reset with a sticky error code. The reads of
+* TXT_CR_E2STS act as barriers.
+*/
+   memcpy_toio(txt + TXT_CR_ERRORCODE, &error, sizeof(error));
+   memcpy_fromio(&val, txt + TXT_CR_E2STS, sizeof(val));
+   memcpy_toio(txt + TXT_CR_CMD_NO_SECRETS, &one, sizeof(one));
+   memcpy_fromio(&val, txt + TXT_CR_E2STS, sizeof(val));
+   memcpy_toio(txt + TXT_CR_CMD_UNLOCK_MEM_CONFIG, &one, sizeof(one));
+   memcpy_fromio(&val, txt + TXT_CR_E2STS, sizeof(val));
+   memcpy_toio(txt + TXT_CR_CMD_RESET, &one, sizeof(one));
+
+   for ( ; ; )
+   asm volatile ("hlt");
+
+   unreachable();
+}
+
+/*
+ * The TXT heap is too big to map all at once with early_ioremap
+ * so it is done a table at a time.
+ */
+static void __init *txt_early_get_heap_table(void __iomem *txt, u32 type,
+u32 bytes)
+{
+   u64 base, size, offset = 0;
+   void *heap;
+   int i;
+
+

[PATCH v7 12/13] x86: Secure Launch late initcall platform module

2023-11-10 Thread Ross Philipson
From: "Daniel P. Smith" 

The Secure Launch platform module is a late init module. During the
init call, the TPM event log is read and measurements taken in the
early boot stub code are located. These measurements are extended
into the TPM PCRs using the mainline TPM kernel driver.

The platform module also registers the securityfs nodes to allow
access to TXT register fields on Intel along with the fetching of
and writing events to the late launch TPM log.

Signed-off-by: Daniel P. Smith 
Signed-off-by: garnetgrimm 
Signed-off-by: Ross Philipson 
---
 arch/x86/kernel/Makefile   |   1 +
 arch/x86/kernel/slmodule.c | 517 +
 2 files changed, 518 insertions(+)
 create mode 100644 arch/x86/kernel/slmodule.c

diff --git a/arch/x86/kernel/Makefile b/arch/x86/kernel/Makefile
index 5848ea310175..948346ff4595 100644
--- a/arch/x86/kernel/Makefile
+++ b/arch/x86/kernel/Makefile
@@ -75,6 +75,7 @@ obj-$(CONFIG_IA32_EMULATION)  += tls.o
 obj-y  += step.o
 obj-$(CONFIG_INTEL_TXT)+= tboot.o
 obj-$(CONFIG_SECURE_LAUNCH)+= slaunch.o
+obj-$(CONFIG_SECURE_LAUNCH)+= slmodule.o
 obj-$(CONFIG_ISA_DMA_API)  += i8237.o
 obj-y  += stacktrace.o
 obj-y  += cpu/
diff --git a/arch/x86/kernel/slmodule.c b/arch/x86/kernel/slmodule.c
new file mode 100644
index ..992469bf15a4
--- /dev/null
+++ b/arch/x86/kernel/slmodule.c
@@ -0,0 +1,517 @@
+// SPDX-License-Identifier: GPL-2.0
+/*
+ * Secure Launch late validation/setup, securityfs exposure and finalization.
+ *
+ * Copyright (c) 2022 Apertus Solutions, LLC
+ * Copyright (c) 2021 Assured Information Security, Inc.
+ * Copyright (c) 2022, Oracle and/or its affiliates.
+ *
+ * Co-developed-by: Garnet T. Grimm 
+ * Signed-off-by: Garnet T. Grimm 
+ * Signed-off-by: Daniel P. Smith 
+ */
+
+#define pr_fmt(fmt) KBUILD_MODNAME ": " fmt
+
+#include 
+#include 
+#include 
+#include 
+#include 
+#include 
+#include 
+#include 
+#include 
+#include 
+#include 
+#include 
+#include 
+
+/*
+ * The macro DECLARE_TXT_PUB_READ_U is used to read values from the TXT
+ * public registers as unsigned values.
+ */
+#define DECLARE_TXT_PUB_READ_U(size, fmt, msg_size)\
+static ssize_t txt_pub_read_u##size(unsigned int offset,   \
+   loff_t *read_offset,\
+   size_t read_len,\
+   char __user *buf)   \
+{  \
+   char msg_buffer[msg_size];  \
+   u##size reg_value = 0;  \
+   void __iomem *txt;  \
+   \
+   txt = ioremap(TXT_PUB_CONFIG_REGS_BASE, \
+   TXT_NR_CONFIG_PAGES * PAGE_SIZE);   \
+   if (!txt)   \
+   return -EFAULT; \
+   memcpy_fromio(®_value, txt + offset, sizeof(u##size));   \
+   iounmap(txt);   \
+   snprintf(msg_buffer, msg_size, fmt, reg_value); \
+   return simple_read_from_buffer(buf, read_len, read_offset,  \
+   &msg_buffer, msg_size); \
+}
+
+DECLARE_TXT_PUB_READ_U(8, "%#04x\n", 6);
+DECLARE_TXT_PUB_READ_U(32, "%#010x\n", 12);
+DECLARE_TXT_PUB_READ_U(64, "%#018llx\n", 20);
+
+#define DECLARE_TXT_FOPS(reg_name, reg_offset, reg_size)   \
+static ssize_t txt_##reg_name##_read(struct file *flip,
\
+   char __user *buf, size_t read_len, loff_t *read_offset) \
+{  \
+   return txt_pub_read_u##reg_size(reg_offset, read_offset,\
+   read_len, buf); \
+}  \
+static const struct file_operations reg_name##_ops = { \
+   .read = txt_##reg_name##_read,  \
+}
+
+DECLARE_TXT_FOPS(sts, TXT_CR_STS, 64);
+DECLARE_TXT_FOPS(ests, TXT_CR_ESTS, 8);
+DECLARE_TXT_FOPS(errorcode, TXT_CR_ERRORCODE, 32);
+DECLARE_TXT_FOPS(didvid, TXT_CR_DIDVID, 64);
+DECLARE_TXT_FOPS(e2sts, TXT_CR_E2STS, 64);
+DECLARE_TXT_FOPS(ver_emif, TXT_CR_VER_EMIF, 32);
+DECLARE_TXT_FOPS(scratchpad, TXT_CR_SCRATCHPAD, 64);
+
+/*
+ * Securityfs exposure
+ */
+struct memfile {
+   char *name;
+   void *addr;
+   size_t size;
+};
+
+static struct memfile sl_evtlog = {"

[PATCH v7 05/13] x86: Secure Launch main header file

2023-11-10 Thread Ross Philipson
Introduce the main Secure Launch header file used in the early SL stub
and the early setup code.

Signed-off-by: Ross Philipson 
---
 include/linux/slaunch.h | 542 
 1 file changed, 542 insertions(+)
 create mode 100644 include/linux/slaunch.h

diff --git a/include/linux/slaunch.h b/include/linux/slaunch.h
new file mode 100644
index ..da2988e32ada
--- /dev/null
+++ b/include/linux/slaunch.h
@@ -0,0 +1,542 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+/*
+ * Main Secure Launch header file.
+ *
+ * Copyright (c) 2022, Oracle and/or its affiliates.
+ */
+
+#ifndef _LINUX_SLAUNCH_H
+#define _LINUX_SLAUNCH_H
+
+/*
+ * Secure Launch Defined State Flags
+ */
+#define SL_FLAG_ACTIVE 0x0001
+#define SL_FLAG_ARCH_SKINIT0x0002
+#define SL_FLAG_ARCH_TXT   0x0004
+
+/*
+ * Secure Launch CPU Type
+ */
+#define SL_CPU_AMD 1
+#define SL_CPU_INTEL   2
+
+#if IS_ENABLED(CONFIG_SECURE_LAUNCH)
+
+#define __SL32_CS  0x0008
+#define __SL32_DS  0x0010
+
+/*
+ * Intel Safer Mode Extensions (SMX)
+ *
+ * Intel SMX provides a programming interface to establish a Measured Launched
+ * Environment (MLE). The measurement and protection mechanisms supported by 
the
+ * capabilities of an Intel Trusted Execution Technology (TXT) platform. SMX is
+ * the processor’s programming interface in an Intel TXT platform.
+ *
+ * See Intel SDM Volume 2 - 6.1 "Safer Mode Extensions Reference"
+ */
+
+/*
+ * SMX GETSEC Leaf Functions
+ */
+#define SMX_X86_GETSEC_SEXIT   5
+#define SMX_X86_GETSEC_SMCTRL  7
+#define SMX_X86_GETSEC_WAKEUP  8
+
+/*
+ * Intel Trusted Execution Technology MMIO Registers Banks
+ */
+#define TXT_PUB_CONFIG_REGS_BASE   0xfed3
+#define TXT_PRIV_CONFIG_REGS_BASE  0xfed2
+#define TXT_NR_CONFIG_PAGES ((TXT_PUB_CONFIG_REGS_BASE - \
+ TXT_PRIV_CONFIG_REGS_BASE) >> PAGE_SHIFT)
+
+/*
+ * Intel Trusted Execution Technology (TXT) Registers
+ */
+#define TXT_CR_STS 0x
+#define TXT_CR_ESTS0x0008
+#define TXT_CR_ERRORCODE   0x0030
+#define TXT_CR_CMD_RESET   0x0038
+#define TXT_CR_CMD_CLOSE_PRIVATE   0x0048
+#define TXT_CR_DIDVID  0x0110
+#define TXT_CR_VER_EMIF0x0200
+#define TXT_CR_CMD_UNLOCK_MEM_CONFIG   0x0218
+#define TXT_CR_SINIT_BASE  0x0270
+#define TXT_CR_SINIT_SIZE  0x0278
+#define TXT_CR_MLE_JOIN0x0290
+#define TXT_CR_HEAP_BASE   0x0300
+#define TXT_CR_HEAP_SIZE   0x0308
+#define TXT_CR_SCRATCHPAD  0x0378
+#define TXT_CR_CMD_OPEN_LOCALITY1  0x0380
+#define TXT_CR_CMD_CLOSE_LOCALITY1 0x0388
+#define TXT_CR_CMD_OPEN_LOCALITY2  0x0390
+#define TXT_CR_CMD_CLOSE_LOCALITY2 0x0398
+#define TXT_CR_CMD_SECRETS 0x08e0
+#define TXT_CR_CMD_NO_SECRETS  0x08e8
+#define TXT_CR_E2STS   0x08f0
+
+/* TXT default register value */
+#define TXT_REGVALUE_ONE   0x1ULL
+
+/* TXTCR_STS status bits */
+#define TXT_SENTER_DONE_STSBIT(0)
+#define TXT_SEXIT_DONE_STS BIT(1)
+
+/*
+ * SINIT/MLE Capabilities Field Bit Definitions
+ */
+#define TXT_SINIT_MLE_CAP_WAKE_GETSEC  0
+#define TXT_SINIT_MLE_CAP_WAKE_MONITOR 1
+
+/*
+ * OS/MLE Secure Launch Specific Definitions
+ */
+#define TXT_OS_MLE_STRUCT_VERSION  1
+#define TXT_OS_MLE_MAX_VARIABLE_MTRRS  32
+
+/*
+ * TXT Heap Table Enumeration
+ */
+#define TXT_BIOS_DATA_TABLE1
+#define TXT_OS_MLE_DATA_TABLE  2
+#define TXT_OS_SINIT_DATA_TABLE3
+#define TXT_SINIT_MLE_DATA_TABLE   4
+#define TXT_SINIT_TABLE_MAXTXT_SINIT_MLE_DATA_TABLE
+
+/*
+ * Secure Launch Defined Error Codes used in MLE-initiated TXT resets.
+ *
+ * TXT Specification
+ * Appendix I ACM Error Codes
+ */
+#define SL_ERROR_GENERIC   0xc0008001
+#define SL_ERROR_TPM_INIT  0xc0008002
+#define SL_ERROR_TPM_INVALID_LOG20 0xc0008003
+#define SL_ERROR_TPM_LOGGING_FAILED0xc0008004
+#define SL_ERROR_REGION_STRADDLE_4GB   0xc0008005
+#define SL_ERROR_TPM_EXTEND0xc0008006
+#define SL_ERROR_MTRR_INV_VCNT 0xc0008007
+#define SL_ERROR_MTRR_INV_DEF_TYPE 0xc0008008
+#define SL_ERROR_MTRR_INV_BASE 0xc0008009
+#define SL_ERROR_MTRR_INV_MASK 0xc000800a
+#define SL_ERROR_MSR_INV_MISC_EN   0xc000800b
+#define SL_ERROR_INV_AP_INTERRUPT  0xc000800c
+#define SL_ERROR_INTEGER_OVERFLOW  0xc000800d
+#define SL_ERROR_HEAP_WALK 0xc000800e
+#define SL_ERROR_HEAP_MAP  0xc000800f
+#define SL_ERROR_REGION_ABOVE_4GB  0xc0008010
+#define SL_ERROR_HEAP_INVALID_DMAR 0xc0008011
+#define SL_ERROR_HEAP_DMAR_SIZE0xc0008012
+#define SL_ERROR_HEAP_DMAR_MAP 0xc0008013
+#define SL_ERROR_HI_PMR_BASE   0xc0008014
+#define SL_ERROR_HI_PMR_SIZE   0xc

[PATCH v7 10/13] kexec: Secure Launch kexec SEXIT support

2023-11-10 Thread Ross Philipson
Prior to running the next kernel via kexec, the Secure Launch code
closes down private SMX resources and does an SEXIT. This allows the
next kernel to start normally without any issues starting the APs etc.

Signed-off-by: Ross Philipson 
---
 arch/x86/kernel/slaunch.c | 73 +++
 kernel/kexec_core.c   |  4 +++
 2 files changed, 77 insertions(+)

diff --git a/arch/x86/kernel/slaunch.c b/arch/x86/kernel/slaunch.c
index cd5aa34e395c..32b0c24a6484 100644
--- a/arch/x86/kernel/slaunch.c
+++ b/arch/x86/kernel/slaunch.c
@@ -523,3 +523,76 @@ void __init slaunch_setup_txt(void)
 
pr_info("Intel TXT setup complete\n");
 }
+
+static inline void smx_getsec_sexit(void)
+{
+   asm volatile (".byte 0x0f,0x37\n"
+ : : "a" (SMX_X86_GETSEC_SEXIT));
+}
+
+/*
+ * Used during kexec and on reboot paths to finalize the TXT state
+ * and do an SEXIT exiting the DRTM and disabling SMX mode.
+ */
+void slaunch_finalize(int do_sexit)
+{
+   u64 one = TXT_REGVALUE_ONE, val;
+   void __iomem *config;
+
+   if ((slaunch_get_flags() & (SL_FLAG_ACTIVE | SL_FLAG_ARCH_TXT)) !=
+   (SL_FLAG_ACTIVE | SL_FLAG_ARCH_TXT))
+   return;
+
+   config = ioremap(TXT_PRIV_CONFIG_REGS_BASE, TXT_NR_CONFIG_PAGES *
+PAGE_SIZE);
+   if (!config) {
+   pr_emerg("Error SEXIT failed to ioremap TXT private reqs\n");
+   return;
+   }
+
+   /* Clear secrets bit for SEXIT */
+   memcpy_toio(config + TXT_CR_CMD_NO_SECRETS, &one, sizeof(one));
+   memcpy_fromio(&val, config + TXT_CR_E2STS, sizeof(val));
+
+   /* Unlock memory configurations */
+   memcpy_toio(config + TXT_CR_CMD_UNLOCK_MEM_CONFIG, &one, sizeof(one));
+   memcpy_fromio(&val, config + TXT_CR_E2STS, sizeof(val));
+
+   /* Close the TXT private register space */
+   memcpy_toio(config + TXT_CR_CMD_CLOSE_PRIVATE, &one, sizeof(one));
+   memcpy_fromio(&val, config + TXT_CR_E2STS, sizeof(val));
+
+   /*
+* Calls to iounmap are not being done because of the state of the
+* system this late in the kexec process. Local IRQs are disabled and
+* iounmap causes a TLB flush which in turn causes a warning. Leaving
+* thse mappings is not an issue since the next kernel is going to
+* completely re-setup memory management.
+*/
+
+   /* Map public registers and do a final read fence */
+   config = ioremap(TXT_PUB_CONFIG_REGS_BASE, TXT_NR_CONFIG_PAGES *
+PAGE_SIZE);
+   if (!config) {
+   pr_emerg("Error SEXIT failed to ioremap TXT public reqs\n");
+   return;
+   }
+
+   memcpy_fromio(&val, config + TXT_CR_E2STS, sizeof(val));
+
+   pr_emerg("TXT clear secrets bit and unlock memory complete.\n");
+
+   if (!do_sexit)
+   return;
+
+   if (smp_processor_id() != 0)
+   panic("Error TXT SEXIT must be called on CPU 0\n");
+
+   /* Disable SMX mode */
+   cr4_set_bits(X86_CR4_SMXE);
+
+   /* Do the SEXIT SMX operation */
+   smx_getsec_sexit();
+
+   pr_info("TXT SEXIT complete.\n");
+}
diff --git a/kernel/kexec_core.c b/kernel/kexec_core.c
index be5642a4ec49..98b2db21a952 100644
--- a/kernel/kexec_core.c
+++ b/kernel/kexec_core.c
@@ -40,6 +40,7 @@
 #include 
 #include 
 #include 
+#include 
 
 #include 
 #include 
@@ -1264,6 +1265,9 @@ int kernel_kexec(void)
cpu_hotplug_enable();
pr_notice("Starting new kernel\n");
machine_shutdown();
+
+   /* Finalize TXT registers and do SEXIT */
+   slaunch_finalize(1);
}
 
kmsg_dump(KMSG_DUMP_SHUTDOWN);
-- 
2.39.3




[PATCH v7 06/13] x86: Add early SHA support for Secure Launch early measurements

2023-11-10 Thread Ross Philipson
From: "Daniel P. Smith" 

The SHA algorithms are necessary to measure configuration information into
the TPM as early as possible before using the values. This implementation
uses the established approach of #including the SHA libraries directly in
the code since the compressed kernel is not uncompressed at this point.

The SHA code here has its origins in the code from the main kernel:

commit c4d5b9ffa31f ("crypto: sha1 - implement base layer for SHA-1")

A modified version of this code was introduced to the lib/crypto/sha1.c
to bring it in line with the sha256 code and allow it to be pulled into the
setup kernel in the same manner as sha256 is.

Signed-off-by: Daniel P. Smith 
Signed-off-by: Ross Philipson 
---
 arch/x86/boot/compressed/Makefile   |  2 +
 arch/x86/boot/compressed/early_sha1.c   | 12 
 arch/x86/boot/compressed/early_sha256.c |  6 ++
 include/crypto/sha1.h   |  1 +
 lib/crypto/sha1.c   | 81 +
 5 files changed, 102 insertions(+)
 create mode 100644 arch/x86/boot/compressed/early_sha1.c
 create mode 100644 arch/x86/boot/compressed/early_sha256.c

diff --git a/arch/x86/boot/compressed/Makefile 
b/arch/x86/boot/compressed/Makefile
index 71fc531b95b4..07a2f56cd571 100644
--- a/arch/x86/boot/compressed/Makefile
+++ b/arch/x86/boot/compressed/Makefile
@@ -118,6 +118,8 @@ vmlinux-objs-$(CONFIG_EFI) += $(obj)/efi.o
 vmlinux-objs-$(CONFIG_EFI_MIXED) += $(obj)/efi_mixed.o
 vmlinux-objs-$(CONFIG_EFI_STUB) += 
$(objtree)/drivers/firmware/efi/libstub/lib.a
 
+vmlinux-objs-$(CONFIG_SECURE_LAUNCH) += $(obj)/early_sha1.o 
$(obj)/early_sha256.o
+
 $(obj)/vmlinux: $(vmlinux-objs-y) FORCE
$(call if_changed,ld)
 
diff --git a/arch/x86/boot/compressed/early_sha1.c 
b/arch/x86/boot/compressed/early_sha1.c
new file mode 100644
index ..0c7cf6f8157a
--- /dev/null
+++ b/arch/x86/boot/compressed/early_sha1.c
@@ -0,0 +1,12 @@
+// SPDX-License-Identifier: GPL-2.0
+/*
+ * Copyright (c) 2022 Apertus Solutions, LLC.
+ */
+
+#include 
+#include 
+#include 
+#include 
+#include 
+
+#include "../../../../lib/crypto/sha1.c"
diff --git a/arch/x86/boot/compressed/early_sha256.c 
b/arch/x86/boot/compressed/early_sha256.c
new file mode 100644
index ..54930166ffee
--- /dev/null
+++ b/arch/x86/boot/compressed/early_sha256.c
@@ -0,0 +1,6 @@
+// SPDX-License-Identifier: GPL-2.0
+/*
+ * Copyright (c) 2022 Apertus Solutions, LLC
+ */
+
+#include "../../../../lib/crypto/sha256.c"
diff --git a/include/crypto/sha1.h b/include/crypto/sha1.h
index 044ecea60ac8..d715dd5332e1 100644
--- a/include/crypto/sha1.h
+++ b/include/crypto/sha1.h
@@ -42,5 +42,6 @@ extern int crypto_sha1_finup(struct shash_desc *desc, const 
u8 *data,
 #define SHA1_WORKSPACE_WORDS   16
 void sha1_init(__u32 *buf);
 void sha1_transform(__u32 *digest, const char *data, __u32 *W);
+void sha1(const u8 *data, unsigned int len, u8 *out);
 
 #endif /* _CRYPTO_SHA1_H */
diff --git a/lib/crypto/sha1.c b/lib/crypto/sha1.c
index 1aebe7be9401..10152125b338 100644
--- a/lib/crypto/sha1.c
+++ b/lib/crypto/sha1.c
@@ -137,4 +137,85 @@ void sha1_init(__u32 *buf)
 }
 EXPORT_SYMBOL(sha1_init);
 
+static void __sha1_transform(u32 *digest, const char *data)
+{
+   u32 ws[SHA1_WORKSPACE_WORDS];
+
+   sha1_transform(digest, data, ws);
+
+   memzero_explicit(ws, sizeof(ws));
+}
+
+static void sha1_update(struct sha1_state *sctx, const u8 *data, unsigned int 
len)
+{
+   unsigned int partial = sctx->count % SHA1_BLOCK_SIZE;
+
+   sctx->count += len;
+
+   if (likely((partial + len) >= SHA1_BLOCK_SIZE)) {
+   int blocks;
+
+   if (partial) {
+   int p = SHA1_BLOCK_SIZE - partial;
+
+   memcpy(sctx->buffer + partial, data, p);
+   data += p;
+   len -= p;
+
+   __sha1_transform(sctx->state, sctx->buffer);
+   }
+
+   blocks = len / SHA1_BLOCK_SIZE;
+   len %= SHA1_BLOCK_SIZE;
+
+   if (blocks) {
+   while (blocks--) {
+   __sha1_transform(sctx->state, data);
+   data += SHA1_BLOCK_SIZE;
+   }
+   }
+   partial = 0;
+   }
+
+   if (len)
+   memcpy(sctx->buffer + partial, data, len);
+}
+
+static void sha1_final(struct sha1_state *sctx, u8 *out)
+{
+   const int bit_offset = SHA1_BLOCK_SIZE - sizeof(__be64);
+   unsigned int partial = sctx->count % SHA1_BLOCK_SIZE;
+   __be64 *bits = (__be64 *)(sctx->buffer + bit_offset);
+   __be32 *digest = (__be32 *)out;
+   int i;
+
+   sctx->buffer[partial++] = 0x80;
+   if (partial > bit_offset) {
+   memset(sctx->buffer + partial, 0x0, SHA1_BLOCK_SIZE - partial);
+   partial = 0;
+
+

[PATCH v7 00/13] x86: Trenchboot secure dynamic launch Linux kernel support

2023-11-10 Thread Ross Philipson
The larger focus of the TrenchBoot project (https://github.com/TrenchBoot) is to
enhance the boot security and integrity in a unified manner. The first area of
focus has been on the Trusted Computing Group's Dynamic Launch for establishing
a hardware Root of Trust for Measurement, also know as DRTM (Dynamic Root of
Trust for Measurement). The project has been and continues to work on providing
a unified means to Dynamic Launch that is a cross-platform (Intel and AMD) and
cross-architecture (x86 and Arm), with our recent involvment in the upcoming
Arm DRTM specification. The order of introducing DRTM to the Linux kernel
follows the maturity of DRTM in the architectures. Intel's Trusted eXecution
Technology (TXT) is present today and only requires a preamble loader, e.g. a
boot loader, and an OS kernel that is TXT-aware. AMD DRTM implementation has
been present since the introduction of AMD-V but requires an additional
component that is AMD specific and referred to in the specification as the
Secure Loader, which the TrenchBoot project has an active prototype in
development. Finally Arm's implementation is in specification development stage
and the project is looking to support it when it becomes available.

This patchset provides detailed documentation of DRTM, the approach used for
adding the capbility, and relevant API/ABI documentation. In addition to the
documentation the patch set introduces Intel TXT support as the first platform
for Linux Secure Launch.

A quick note on terminology. The larger open source project itself is called
TrenchBoot, which is hosted on Github (links below). The kernel feature enabling
the use of Dynamic Launch technology is referred to as "Secure Launch" within
the kernel code. As such the prefixes sl_/SL_ or slaunch/SLAUNCH will be seen
in the code. The stub code discussed above is referred to as the SL stub.

The Secure Launch feature starts with patch #2. Patch #1 was authored by Arvind
Sankar. There is no further status on this patch at this point but
Secure Launch depends on it so it is included with the set.

## NOTE: EFI-STUB CONFLICTS

The primary focus of the v7 patch set was to align with Thomas Gleixner's
changes to support parallel CPU bring-up on x86 platforms. In the process of
rebasing and testing v7, it was discovered that there were significant changes
to the efi-stub code. As a result, the efi-stub patch was dropped pending
maintainer feedback on an appropriate means to re-integrate Secure Launch. The
primary goal being to best align the DL stub functionality with efi-stub design.

It was discovered that the efi-stub now subsumes all the setup which head_64.S
was responsible. When attempting to rebase the DL stub patch on these changes,
it became apparent that it would not be a simple relocation of the Secure Launch
call. There are numerous things, such as efi-stub decompressing the main line
kernel, which make simple relocation challenging. There may also be additional
changes that should be considered when integrating Secure Launch support. It
would be beneficial, and much appreciated, to obtain guidance from maintainers.
Upon successful collaboration with the efi-stub maintainers, a Secure Launch v8
series will be produce to re-introduce the DL stub patch.

Links:

The TrenchBoot project including documentation:

https://trenchboot.org

The TrenchBoot project on Github:

https://github.com/trenchboot

Intel TXT is documented in its own specification and in the SDM Instruction Set 
volume:

https://www.intel.com/content/dam/www/public/us/en/documents/guides/intel-txt-software-development-guide.pdf
https://software.intel.com/en-us/articles/intel-sdm

AMD SKINIT is documented in the System Programming manual:

https://www.amd.com/system/files/TechDocs/24593.pdf

GRUB2 pre-launch support branch (WIP):

https://github.com/TrenchBoot/grub/tree/grub-sl-fc-38-dlstub

Patch set based on commit:

torvolds/master/6bc986ab839c844e78a2333a02e55f02c9e57935

Thanks
Ross Philipson and Daniel P. Smith

Changes in v2:

 - Modified 32b entry code to prevent causing relocations in the compressed
   kernel.
 - Dropped patches for compressed kernel TPM PCR extender.
 - Modified event log code to insert log delimiter events and not rely
   on TPM access.
 - Stop extending PCRs in the early Secure Launch stub code.
 - Removed Kconfig options for hash algorithms and use the algorithms the
   ACM used.
 - Match Secure Launch measurement algorithm use to those reported in the
   TPM 2.0 event log.
 - Read the TPM events out of the TPM and extend them into the PCRs using
   the mainline TPM driver. This is done in the late initcall module.
 - Allow use of alternate PCR 19 and 20 for post ACM measurements.
 - Add Kconfig constraints needed by Secure Launch (disable KASLR
   and add x2apic dependency).
 - Fix testing of SL_FLAGS when determining if Secure Launch is active
   and the architecture is TXT.
 - Use SYM_DATA_START_LOCAL macros in early entry

[PATCH v7 04/13] x86: Secure Launch Resource Table header file

2023-11-10 Thread Ross Philipson
Introduce the Secure Launch Resource Table which forms the formal
interface between the pre and post launch code.

Signed-off-by: Ross Philipson 
---
 include/linux/slr_table.h | 270 ++
 1 file changed, 270 insertions(+)
 create mode 100644 include/linux/slr_table.h

diff --git a/include/linux/slr_table.h b/include/linux/slr_table.h
new file mode 100644
index ..42020988233a
--- /dev/null
+++ b/include/linux/slr_table.h
@@ -0,0 +1,270 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+/*
+ * Secure Launch Resource Table
+ *
+ * Copyright (c) 2023, Oracle and/or its affiliates.
+ */
+
+#ifndef _LINUX_SLR_TABLE_H
+#define _LINUX_SLR_TABLE_H
+
+/* Put this in efi.h if it becomes a standard */
+#define SLR_TABLE_GUID EFI_GUID(0x877a9b2a, 0x0385, 
0x45d1, 0xa0, 0x34, 0x9d, 0xac, 0x9c, 0x9e, 0x56, 0x5f)
+
+/* SLR table header values */
+#define SLR_TABLE_MAGIC0x4452544d
+#define SLR_TABLE_REVISION 1
+
+/* Current revisions for the policy and UEFI config */
+#define SLR_POLICY_REVISION1
+#define SLR_UEFI_CONFIG_REVISION   1
+
+/* SLR defined architectures */
+#define SLR_INTEL_TXT  1
+#define SLR_AMD_SKINIT 2
+
+/* SLR defined bootloaders */
+#define SLR_BOOTLOADER_INVALID 0
+#define SLR_BOOTLOADER_GRUB1
+
+/* Log formats */
+#define SLR_DRTM_TPM12_LOG 1
+#define SLR_DRTM_TPM20_LOG 2
+
+/* DRTM Policy Entry Flags */
+#define SLR_POLICY_FLAG_MEASURED   0x1
+#define SLR_POLICY_IMPLICIT_SIZE   0x2
+
+/* Array Lengths */
+#define TPM_EVENT_INFO_LENGTH  32
+#define TXT_VARIABLE_MTRRS_LENGTH  32
+
+/* Tags */
+#define SLR_ENTRY_INVALID  0x
+#define SLR_ENTRY_DL_INFO  0x0001
+#define SLR_ENTRY_LOG_INFO 0x0002
+#define SLR_ENTRY_ENTRY_POLICY 0x0003
+#define SLR_ENTRY_INTEL_INFO   0x0004
+#define SLR_ENTRY_AMD_INFO 0x0005
+#define SLR_ENTRY_ARM_INFO 0x0006
+#define SLR_ENTRY_UEFI_INFO0x0007
+#define SLR_ENTRY_UEFI_CONFIG  0x0008
+#define SLR_ENTRY_END  0x
+
+/* Entity Types */
+#define SLR_ET_UNSPECIFIED 0x
+#define SLR_ET_SLRT0x0001
+#define SLR_ET_BOOT_PARAMS 0x0002
+#define SLR_ET_SETUP_DATA  0x0003
+#define SLR_ET_CMDLINE 0x0004
+#define SLR_ET_UEFI_MEMMAP 0x0005
+#define SLR_ET_RAMDISK 0x0006
+#define SLR_ET_TXT_OS2MLE  0x0010
+#define SLR_ET_UNUSED  0x
+
+#ifndef __ASSEMBLY__
+
+/*
+ * Primary SLR Table Header
+ */
+struct slr_table {
+   u32 magic;
+   u16 revision;
+   u16 architecture;
+   u32 size;
+   u32 max_size;
+   /* entries[] */
+} __packed;
+
+/*
+ * Common SLRT Table Header
+ */
+struct slr_entry_hdr {
+   u16 tag;
+   u16 size;
+} __packed;
+
+/*
+ * Boot loader context
+ */
+struct slr_bl_context {
+   u16 bootloader;
+   u16 reserved;
+   u64 context;
+} __packed;
+
+/*
+ * DRTM Dynamic Launch Configuration
+ */
+struct slr_entry_dl_info {
+   struct slr_entry_hdr hdr;
+   struct slr_bl_context bl_context;
+   u64 dl_handler;
+   u64 dce_base;
+   u32 dce_size;
+   u64 dlme_entry;
+} __packed;
+
+/*
+ * TPM Log Information
+ */
+struct slr_entry_log_info {
+   struct slr_entry_hdr hdr;
+   u16 format;
+   u16 reserved;
+   u64 addr;
+   u32 size;
+} __packed;
+
+/*
+ * DRTM Measurement Policy
+ */
+struct slr_entry_policy {
+   struct slr_entry_hdr hdr;
+   u16 revision;
+   u16 nr_entries;
+   /* policy_entries[] */
+} __packed;
+
+/*
+ * DRTM Measurement Entry
+ */
+struct slr_policy_entry {
+   u16 pcr;
+   u16 entity_type;
+   u16 flags;
+   u16 reserved;
+   u64 entity;
+   u64 size;
+   char evt_info[TPM_EVENT_INFO_LENGTH];
+} __packed;
+
+/*
+ * Secure Launch defined MTRR saving structures
+ */
+struct slr_txt_mtrr_pair {
+   u64 mtrr_physbase;
+   u64 mtrr_physmask;
+} __packed;
+
+struct slr_txt_mtrr_state {
+   u64 default_mem_type;
+   u64 mtrr_vcnt;
+   struct slr_txt_mtrr_pair mtrr_pair[TXT_VARIABLE_MTRRS_LENGTH];
+} __packed;
+
+/*
+ * Intel TXT Info table
+ */
+struct slr_entry_intel_info {
+   struct slr_entry_hdr hdr;
+   u64 saved_misc_enable_msr;
+   struct slr_txt_mtrr_state saved_bsp_mtrrs;
+} __packed;
+
+/*
+ * AMD SKINIT Info table
+ */
+struct slr_entry_amd_info {
+   struct slr_entry_hdr hdr;
+} __packed;
+
+/*
+ * ARM DRTM Info table
+ */
+struct slr_entry_arm_info {
+   struct slr_entry_hdr hdr;
+} __packed;
+
+struct slr_entry_uefi_config {
+   struct slr_entry_hdr hdr;
+   u16 revision;
+   u16 nr_entries;
+   /* uefi_cfg_entries[] */
+} __packed;
+
+struct slr_uefi_cfg_entry {
+   u16 pcr;
+   u16 reserved;
+   u64 cfg; /* address or value */
+   u32 size;
+   char evt_info[TPM_EVENT_INFO_LENGTH];
+} __packed;
+
+static inline void *slr_end_of_entrys(struct slr_table *table)
+{
+   return (((void *)table) + table

  1   2   >