Re: [PATCH 3/5] arm64: change the prototype of image probe function

2023-05-05 Thread Pingfan Liu
On Fri, May 5, 2023 at 11:42 PM Simon Horman  wrote:
>
> On Fri, May 05, 2023 at 10:54:35AM +0800, Pingfan Liu wrote:
> > Change the aarch64 probe's prototype from
> >   typedef int (probe_t)(const char *kernel_buf, off_t kernel_size);
> > to
> >   typedef int (probe_t)(const char *kernel_buf, off_t kernel_size, 
> > struct kexec_info *info);
> >
> > Later, info can be used to return both the file descriptor and parsed kernel
> > buffer. The fd is passed to sys_kexec_file_load, and the parsed kernel
> > buffer is used by image's load function.
> >
> > Signed-off-by: Pingfan Liu 
>
> Hi Pingfan,
>
> I am seeing a build failure on ARM (32bit).
>
>   138 | int zImage_arm_probe(const char *UNUSED(buf), off_t UNUSED(len))
>   | ^~~~
> In file included from ../../kexec/arch/arm/kexec-zImage-arm.c:21:
> ../../kexec/arch/arm/kexec-arm.h:12:5: note: previous declaration of 
> ‘zImage_arm_probe’ was here
>12 | int zImage_arm_probe(const char *buf, off_t len, struct kexec_info 
> *info);
>   | ^~~~
> make[1]: *** [Makefile:124: kexec/arch/arm/kexec-zImage-arm.o] Error 1
> make[1]: *** Waiting for unfinished jobs
> make[1]: Leaving directory 
> '/home/runner/work/kexec-tools/kexec-tools/kexec-tools-2.0.26.git/_build/sub'
> make: *** [Makefile:276: distcheck] Error 2
> Error: Process completed with exit code 2.
>

Oops. I just tested on x86_64 and aarch64, and did not detect this bug.

> Link: 
> https://github.com/horms/kexec-tools/actions/runs/4895124719/jobs/8740272103
>
> ...
>
> > diff --git a/kexec/kexec.h b/kexec/kexec.h
> > index 0d820ad..6e8430e 100644
> > --- a/kexec/kexec.h
> > +++ b/kexec/kexec.h
> > @@ -191,7 +191,13 @@ unsigned long locate_hole(struct kexec_info *info,
> >   unsigned long hole_min, unsigned long hole_max,
> >   int hole_end);
> >
> > +#ifndef __aarch64__
> >  typedef int (probe_t)(const char *kernel_buf, off_t kernel_size);
> > +#else
> > +typedef int (probe_t)(const char *kern_fname, off_t kernel_size,
> > + struct kexec_info *info);
> > +#endif
> > +
>
> This seems kind of unfortunate.
> Could we update the prototype for all architectures?
>

I will have a try with coccinelle.
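For the record, a first stab at such a semantic patch might look like this (untested sketch; the rule is an assumption and matching only functions whose names end in "_probe" may still be too broad or too narrow):

```
@@
identifier fn =~ ".*_probe$";
identifier buf, len;
@@
- int fn(const char *buf, off_t len)
+ int fn(const char *buf, off_t len, struct kexec_info *info)
 {
 ...
 }
```

The callers and the probe_t typedef would still need to be updated by hand or by additional rules.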

Thanks for your help.

Regards,

Pingfan


_______________________________________________
kexec mailing list
kexec@lists.infradead.org
http://lists.infradead.org/mailman/listinfo/kexec


Re: [PATCH 0/5] arm64: zboot support

2023-05-05 Thread Pingfan Liu
On Sat, May 6, 2023 at 6:38 AM Jeremy Linton  wrote:
>
> On 5/4/23 21:54, Pingfan Liu wrote:
> > As more complicated capsule kernel formats appear, such as zboot, where
> > the compressed kernel is stored as a payload, the straightforward
> > decompression can no longer meet the demand.
> >
> > As a first step, on aarch64, the kernel file is read in by the probe
> > method, which decides by itself how to unfold the content.
> >
> > The newly designed probe interface returns two things:
> > 1. the parsed kernel_buf, returned so that it can be used by
> > the image load method later.
> > 2. the final fd to pass to sys_kexec_file_load. Since the aarch64 kernel
> > can only work with the Image format, the outer payload should be stripped
> > and a temporary Image file created.
> >
>
> Well this is unfortunate. I pinged you a few days back but I guess it
> must have gotten lost in the noise.
>

Sorry for that.

> I'm ok with dropping my set, but I don't see your 5/5, and it doesn't
> show up in the archive either. Which presumably is the core ZBOOT
> functionality. From what I can tell without that, this set then is all
> just cleanup. So did you do that on purpose? If so, maybe it's best
> to just rebase on top of my set?
>

I forgot to add the CC in that patch, and it turns out to be sent to
myself only. I have re-sent it.

I am happy to rebase on your set with some adaptation to the new probe
interface.

>
> Given I'm not a regular contributor here I'm not sure my opinion matters
> much, but one of the things that bothered me when I was yak shaving my
> set, was the #ifdef aarch64 bits in the common code. I considered
> attempting to remove it but that was a bit more involved than I wanted
> to get. But, as Ard correctly points out, zboot is being used by
> more than aarch64 at this point. So that is also an argument for ensuring
> that whatever the probe()/etc interface is, it's not #ifdefed.
>

Yes, it makes the code look untidy. I have tried to tackle it with sed,
but failed. I will try to achieve it with coccinelle.

Thanks,

Pingfan




[PATCH 5/5] arm64: add support for zboot image

2023-05-05 Thread Pingfan Liu
A zboot image consists of a zboot_header followed by Image.gz, so the
compressed payload must be located and parsed with extra effort. Most
importantly, the kernel can only work with Image, so the final fd should
point to a temporary file which contains the Image.

Signed-off-by: Pingfan Liu 
To: kexec@lists.infradead.org
Cc: ho...@verge.net.au
Cc: a...@kernel.org
Cc: jeremy.lin...@arm.com

---
 kexec/arch/arm64/Makefile|   3 +-
 kexec/arch/arm64/kexec-arm64.c   |   1 +
 kexec/arch/arm64/kexec-arm64.h   |   5 +
 kexec/arch/arm64/kexec-zboot-arm64.c | 261 +++
 kexec/arch/arm64/zboot.h |  26 +++
 5 files changed, 295 insertions(+), 1 deletion(-)
 create mode 100644 kexec/arch/arm64/kexec-zboot-arm64.c
 create mode 100644 kexec/arch/arm64/zboot.h

diff --git a/kexec/arch/arm64/Makefile b/kexec/arch/arm64/Makefile
index d27c8ee..7826a36 100644
--- a/kexec/arch/arm64/Makefile
+++ b/kexec/arch/arm64/Makefile
@@ -16,7 +16,8 @@ arm64_KEXEC_SRCS += \
kexec/arch/arm64/kexec-elf-arm64.c \
kexec/arch/arm64/kexec-uImage-arm64.c \
kexec/arch/arm64/kexec-image-arm64.c \
-   kexec/arch/arm64/kexec-zImage-arm64.c
+   kexec/arch/arm64/kexec-zImage-arm64.c \
+   kexec/arch/arm64/kexec-zboot-arm64.c
 
 arm64_UIMAGE = kexec/kexec-uImage.c
 
diff --git a/kexec/arch/arm64/kexec-arm64.c b/kexec/arch/arm64/kexec-arm64.c
index ec6df4b..88b5f57 100644
--- a/kexec/arch/arm64/kexec-arm64.c
+++ b/kexec/arch/arm64/kexec-arm64.c
@@ -72,6 +72,7 @@ const struct arch_map_entry arches[] = {
 
 struct file_type file_type[] = {
{"vmlinux", elf_arm64_probe, elf_arm64_load, elf_arm64_usage},
+   {"zboot", zboot_arm64_probe, zboot_arm64_load, zboot_arm64_usage},
{"Image", image_arm64_probe, image_arm64_load, image_arm64_usage},
{"uImage", uImage_arm64_probe, uImage_arm64_load, uImage_arm64_usage},
{"zImage", zImage_arm64_probe, zImage_arm64_load, zImage_arm64_usage},
diff --git a/kexec/arch/arm64/kexec-arm64.h b/kexec/arch/arm64/kexec-arm64.h
index 88bb508..98e1be9 100644
--- a/kexec/arch/arm64/kexec-arm64.h
+++ b/kexec/arch/arm64/kexec-arm64.h
@@ -50,6 +50,11 @@ int zImage_arm64_load(int argc, char **argv, const char *kernel_buf,
 void zImage_arm64_usage(void);
 
 
+int zboot_arm64_probe(const char *kernel_buf, off_t kernel_size,
+	struct kexec_info *info);
+int zboot_arm64_load(int argc, char **argv, const char *kernel_buf,
+   off_t kernel_size, struct kexec_info *info);
+void zboot_arm64_usage(void);
+
 extern off_t initrd_base;
 extern off_t initrd_size;
 
diff --git a/kexec/arch/arm64/kexec-zboot-arm64.c b/kexec/arch/arm64/kexec-zboot-arm64.c
new file mode 100644
index 000..b6c4fe3
--- /dev/null
+++ b/kexec/arch/arm64/kexec-zboot-arm64.c
@@ -0,0 +1,261 @@
+/*
+ * ARM64 kexec zboot Image support.
+ *
+ * Based on kexec-zImage-arm64.c
+ */
+
+#define _GNU_SOURCE
+
+#include 
+#include 
+#include 
+#include 
+#include "crashdump-arm64.h"
+#include "image-header.h"
+#include "kexec.h"
+#include "kexec-arm64.h"
+#include "kexec-syscall.h"
+#include "kexec-zlib.h"
+#include "zboot.h"
+#include "arch/options.h"
+
+#define MZ_MAGIC   0x5a4d  /* "MZ" */
+#define FILENAME_IMAGE_GZ  "/tmp/zboot_Image_gzXXXXXX"
+#define FILENAME_IMAGE "/tmp/zboot_ImageXXXXXX"
+
+
+static struct zboot_image_header *check_zboot_header(char *buf, off_t size)
+{
+   uint32_t magic = MZ_MAGIC;
+   struct zboot_image_header *zh =
+   (struct zboot_image_header *)(buf);
+
+   if (!zh || (size < sizeof(*zh)))
+   return NULL;
+
+   if (memcmp(&zh->magic, &magic, sizeof(zh->magic))) {
+   dbgprintf("%s: Not a zboot Image\n", __func__);
+   return NULL;
+   }
+   if (strncmp(zh->zimg, "zimg", 4)) {
+   dbgprintf("%s: Not a zboot Image\n", __func__);
+   return NULL;
+   }
+   return zh;
+}
+
+/* Returns:
+ * -1 : in case of error/invalid format.
+ */
+int zboot_arm64_probe(const char *kern_fname, off_t kernel_size,
+	struct kexec_info *info)
+{
+   int ret = 0;
+   int fd = 0, gz_fd = 0;
+   int kernel_fd = 0;
+   char *fname = NULL, *gz_fname;
+   char *kernel_buf, *uncompressed_buf = NULL;
+   char *compressed_kernel;
+   const struct arm64_image_header *h;
+   struct zboot_image_header *zh;
+
+   kernel_buf = slurp_file(kern_fname, &kernel_size);
+   zh = check_zboot_header(kernel_buf, kernel_size);
+   if (!zh)
+   return -1;
+
+   if (!(gz_fname = strdup(FILENAME_IMAGE_GZ))) {
+   dbgprintf("%s: Can't duplicate string %s\n", __func__,
+   FILENAME_IMAGE_GZ);
+   return -1;
+   }
+
+   if ((gz_fd = mkstemp(gz_fname)) < 0) {
+   dbgprintf("%s: Can't open file %s\n", __func__,
+   gz_fname);
+   ret = -1;
+   goto fail_mkstemp;
+   }
+
+   /* locate the 

Re: [PATCH 0/5] arm64: zboot support

2023-05-05 Thread Jeremy Linton

On 5/4/23 21:54, Pingfan Liu wrote:

As more complicated capsule kernel formats appear, such as zboot, where
the compressed kernel is stored as a payload, the straightforward
decompression can no longer meet the demand.

As a first step, on aarch64, the kernel file is read in by the probe
method, which decides by itself how to unfold the content.

The newly designed probe interface returns two things:
1. the parsed kernel_buf, returned so that it can be used by
the image load method later.
2. the final fd to pass to sys_kexec_file_load. Since the aarch64 kernel
can only work with the Image format, the outer payload should be stripped
and a temporary Image file created.



Well this is unfortunate. I pinged you a few days back but I guess it 
must have gotten lost in the noise.


I'm ok with dropping my set, but I don't see your 5/5, and it doesn't 
show up in the archive either. Which presumably is the core ZBOOT 
functionality. From what I can tell without that, this set then is all 
just cleanup. So did you do that on purpose? If so, maybe it's best
to just rebase on top of my set?



Given I'm not a regular contributor here I'm not sure my opinion matters 
much, but one of the things that bothered me when I was yak shaving my 
set, was the #ifdef aarch64 bits in the common code. I considered
attempting to remove it but that was a bit more involved than I wanted
to get. But, as Ard correctly points out, zboot is being used by
more than aarch64 at this point. So that is also an argument for ensuring
that whatever the probe()/etc interface is, it's not #ifdefed.







To: kexec@lists.infradead.org
Cc: ho...@verge.net.au
Cc: a...@kernel.org
Cc: jeremy.lin...@arm.com

Pingfan Liu (5):
   kexec: Adding missing free for kernel_buf
   arm64/zImage: Remove unnecessary allocation for
 kernel_uncompressed_buf
   arm64: change the prototype of image probe function
   arm64: Scatter the reading of kernel file into each probe
   arm64: add support for zboot image

  kexec/arch/arm/kexec-arm.h|   4 +-
  kexec/arch/arm/kexec-uImage-arm.c |   2 +-
  kexec/arch/arm64/Makefile |   3 +-
  kexec/arch/arm64/kexec-arm64.c|   1 +
  kexec/arch/arm64/kexec-arm64.h|  13 +-
  kexec/arch/arm64/kexec-elf-arm64.c|   7 +-
  kexec/arch/arm64/kexec-image-arm64.c  |   6 +-
  kexec/arch/arm64/kexec-uImage-arm64.c |  17 +-
  kexec/arch/arm64/kexec-zImage-arm64.c |  23 +--
  kexec/arch/arm64/kexec-zboot-arm64.c  | 261 ++
  kexec/arch/arm64/zboot.h  |  26 +++
  kexec/kexec.c |  48 +++--
  kexec/kexec.h |   8 +
  13 files changed, 377 insertions(+), 42 deletions(-)
  create mode 100644 kexec/arch/arm64/kexec-zboot-arm64.c
  create mode 100644 kexec/arch/arm64/zboot.h






Re: [PATCH v6 07/14] x86: Secure Launch kernel early boot stub

2023-05-05 Thread Simon Horman
On Fri, May 05, 2023 at 02:58:28PM -0400, Ross Philipson wrote:
> On 5/5/23 13:47, Simon Horman wrote:
> > On Thu, May 04, 2023 at 02:50:16PM +, Ross Philipson wrote:

...

> > > +asmlinkage __visible void sl_check_region(void *base, u32 size)
> > > +{
> > > + sl_check_pmr_coverage(base, size, false);
> > > +}
> > 
> > I'm a bit unsure what to do here, but clang-16 with W=1 says the following.
> > 
> > arch/x86/boot/compressed/sl_main.c:533:27: warning: no previous prototype 
> > for function 'sl_main' [-Wmissing-prototypes]
> > asmlinkage __visible void sl_main(void *bootparams)
> >^
> > arch/x86/boot/compressed/sl_main.c:533:22: note: declare 'static' if the 
> > function is not intended to be used outside of this translation unit
> > asmlinkage __visible void sl_main(void *bootparams)
> >   ^
> >   static
> 
> Yea we will have to look into why this is. This function is only ever called
> from asm code so that might have something to do with this.

Thanks.

...

> > > diff --git a/arch/x86/include/uapi/asm/bootparam.h 
> > > b/arch/x86/include/uapi/asm/bootparam.h
> > > index 01d19fc..74e3e7df 100644
> > > --- a/arch/x86/include/uapi/asm/bootparam.h
> > > +++ b/arch/x86/include/uapi/asm/bootparam.h
> > > @@ -26,6 +26,7 @@
> > >   /* loadflags */
> > >   #define LOADED_HIGH (1<<0)
> > >   #define KASLR_FLAG  (1<<1)
> > > +#define SLAUNCH_FLAG (1<<2)
> > >   #define QUIET_FLAG  (1<<5)
> > >   #define KEEP_SEGMENTS   (1<<6)
> > >   #define CAN_USE_HEAP(1<<7)
> > 
> > nit: please consider using BIT()
> 
> I am a little reluctant to change something like this in an existing header.
> It seems a bit out of scope for the patch set.

Yes, sorry for the noise on this one.
I agree that what you have is the best approach here.



Re: [PATCH v6 12/14] x86: Secure Launch late initcall platform module

2023-05-05 Thread Simon Horman
On Thu, May 04, 2023 at 02:50:21PM +, Ross Philipson wrote:
> From: "Daniel P. Smith" 
> 
> The Secure Launch platform module is a late init module. During the
> init call, the TPM event log is read and measurements taken in the
> early boot stub code are located. These measurements are extended
> into the TPM PCRs using the mainline TPM kernel driver.
> 
> The platform module also registers the securityfs nodes to allow
> access to TXT register fields on Intel along with the fetching of
> and writing events to the late launch TPM log.
> 
> Signed-off-by: Daniel P. Smith 
> Signed-off-by: garnetgrimm 
> Signed-off-by: Ross Philipson 

Hi Ross,

a few more items from my side.

...

> diff --git a/arch/x86/kernel/slmodule.c b/arch/x86/kernel/slmodule.c

...

> +/*
> + * Securityfs exposure
> + */
> +struct memfile {
> + char *name;
> + void *addr;
> + size_t size;
> +};
> +
> +static struct memfile sl_evtlog = {"eventlog", 0, 0};

I don't think the 0 fields are necessary above, memset will zero
any fields not explicitly set. But if you want to go that way, then
I think the first one should be NULL, as the addr field is a pointer.

> +static void *txt_heap;
> +static struct txt_heap_event_log_pointer2_1_element __iomem *evtlog20;
> +static DEFINE_MUTEX(sl_evt_log_mutex);

> +static ssize_t sl_evtlog_read(struct file *file, char __user *buf,
> +   size_t count, loff_t *pos)
> +{
> + ssize_t size;
> +
> + if (!sl_evtlog.addr)
> + return 0;
> +
> > + mutex_lock(&sl_evt_log_mutex);
> + size = simple_read_from_buffer(buf, count, pos, sl_evtlog.addr,
> +sl_evtlog.size);
> > + mutex_unlock(&sl_evt_log_mutex);
> +
> + return size;
> +}
> +
> +static ssize_t sl_evtlog_write(struct file *file, const char __user *buf,
> + size_t datalen, loff_t *ppos)

nit: the line above doesn't align to the '(' on the line before that.

> +{
> + ssize_t result;
> + char *data;
> +
> + if (!sl_evtlog.addr)
> + return 0;
> +
> + /* No partial writes. */
> + result = -EINVAL;
> + if (*ppos != 0)
> + goto out;
> +
> + data = memdup_user(buf, datalen);
> + if (IS_ERR(data)) {
> + result = PTR_ERR(data);
> + goto out;
> + }
> +
> > + mutex_lock(&sl_evt_log_mutex);
> + if (evtlog20)
> + result = tpm20_log_event(evtlog20, sl_evtlog.addr,
> +  sl_evtlog.size, datalen, data);

Sparse says that the type of the first argument of tmp20_log_event is:

struct txt_heap_event_log_pointer2_1_element *

However, the type of evtlog20 is:

struct txt_heap_event_log_pointer2_1_element __iomem *

> + else
> + result = tpm12_log_event(sl_evtlog.addr, sl_evtlog.size,
> +  datalen, data);
> > + mutex_unlock(&sl_evt_log_mutex);
> +
> + kfree(data);
> +out:
> + return result;
> +}

...

> +static long slaunch_expose_securityfs(void)
> +{
> + long ret = 0;
> + int i;
> +
> + slaunch_dir = securityfs_create_dir("slaunch", NULL);
> + if (IS_ERR(slaunch_dir))
> + return PTR_ERR(slaunch_dir);
> +
> + if (slaunch_get_flags() & SL_FLAG_ARCH_TXT) {
> + txt_dir = securityfs_create_dir("txt", slaunch_dir);
> + if (IS_ERR(txt_dir)) {
> + ret = PTR_ERR(txt_dir);
> + goto remove_slaunch;
> + }
> +
> + for (i = 0; i < ARRAY_SIZE(sl_txt_files); i++) {
> + txt_entries[i] = securityfs_create_file(
> + sl_txt_files[i].name, 0440,
> + txt_dir, NULL,
> + sl_txt_files[i].fops);
> + if (IS_ERR(txt_entries[i])) {
> + ret = PTR_ERR(txt_entries[i]);
> + goto remove_files;
> + }
> + }
> +

nit: no blank line here.

> + }
> +
> + if (sl_evtlog.addr > 0) {

addr is a pointer. So perhaps:

if (sl_evtlog.addr) {

> + event_file = securityfs_create_file(
> + sl_evtlog.name, 0440,
> + slaunch_dir, NULL,
> + _evtlog_ops);
> + if (IS_ERR(event_file)) {
> + ret = PTR_ERR(event_file);
> + goto remove_files;
> + }
> + }
> +
> + return 0;
> +
> +remove_files:
> + if (slaunch_get_flags() & SL_FLAG_ARCH_TXT) {
> + while (--i >= 0)
> + securityfs_remove(txt_entries[i]);
> + securityfs_remove(txt_dir);
> + }
> +remove_slaunch:
> + securityfs_remove(slaunch_dir);
> +
> + return ret;
> +}

...

> +static void slaunch_intel_evtlog(void 

Re: [PATCH v6 08/14] x86: Secure Launch kernel late boot stub

2023-05-05 Thread Ross Philipson

On 5/5/23 13:52, Simon Horman wrote:

On Thu, May 04, 2023 at 02:50:17PM +, Ross Philipson wrote:

The routine slaunch_setup is called out of the x86 specific setup_arch
routine during early kernel boot. After determining what platform is
present, various operations specific to that platform occur. This
includes finalizing setting for the platform late launch and verifying
that memory protections are in place.

For TXT, this code also reserves the original compressed kernel setup
area where the APs were left looping so that this memory cannot be used.

Signed-off-by: Ross Philipson 


Hi Ross,

a few nits from my side.


Yup we will fix all of those.

Thanks
Ross




+/*
+ * The TXT heap is too big to map all at once with early_ioremap
+ * so it is done a table at a time.
+ */
+static void __init *txt_early_get_heap_table(void __iomem *txt, u32 type,
+u32 bytes)
+{
+   u64 base, size, offset = 0;
+   void *heap;
+   int i;
+
+   if (type > TXT_SINIT_TABLE_MAX)
+   slaunch_txt_reset(txt,
+   "Error invalid table type for early heap walk\n",
+   SL_ERROR_HEAP_WALK);


nit: the indentation should align to the opening '('.

slaunch_txt_reset(txt,
  "Error invalid table type for early heap 
walk\n",
  SL_ERROR_HEAP_WALK);

Likewise in a few other places in this patch.

...


+static void __init slaunch_txt_reserve_range(u64 base, u64 size)
+{
+   int type;
+
+   type = e820__get_entry_type(base, base + size - 1);
+   if (type == E820_TYPE_RAM) {
+   pr_info("memblock reserve base: %llx size: %llx\n", base, size);
+   memblock_reserve(base, size);
+   }
+}
+
+/*
+ * For Intel, certain regions of memory must be marked as reserved by putting
+ * them on the memblock reserved list if they are not already e820 reserved.
+ * This includes:
+ *  - The TXT HEAP
+ *  - The ACM area
+ *  - The TXT private register bank
+ *  - The MDR list sent to the MLE by the ACM (see TXT specification)
+ *  (Normally the above are properly reserved by firmware but if it was not
+ *  done, reserve them now)
+ *  - The AP wake block
+ *  - TPM log external to the TXT heap
+ *
+ * Also if the low PMR doesn't cover all memory < 4G, any RAM regions above
+ * the low PMR must be reservered too.


nit: s/reservered/reserved/


+ */
+static void __init slaunch_txt_reserve(void __iomem *txt)
+{
+   struct txt_sinit_memory_descriptor_record *mdr;
+   struct txt_sinit_mle_data *sinit_mle_data;
+   u64 base, size, heap_base, heap_size;
+   u32 mdrnum, mdroffset, mdrslen;
+   u32 field_offset, i;
+   void *mdrs;
+
+   base = TXT_PRIV_CONFIG_REGS_BASE;
+   size = TXT_PUB_CONFIG_REGS_BASE - TXT_PRIV_CONFIG_REGS_BASE;
+   slaunch_txt_reserve_range(base, size);
+
+   memcpy_fromio(&heap_base, txt + TXT_CR_HEAP_BASE, sizeof(heap_base));
+   memcpy_fromio(&heap_size, txt + TXT_CR_HEAP_SIZE, sizeof(heap_size));
+   slaunch_txt_reserve_range(heap_base, heap_size);
+
+   memcpy_fromio(&base, txt + TXT_CR_SINIT_BASE, sizeof(base));
+   memcpy_fromio(&size, txt + TXT_CR_SINIT_SIZE, sizeof(size));
+   slaunch_txt_reserve_range(base, size);
+
+   field_offset = offsetof(struct txt_sinit_mle_data,
+   sinit_vtd_dmar_table_size);
+   sinit_mle_data = txt_early_get_heap_table(txt, TXT_SINIT_MLE_DATA_TABLE,
+ field_offset);
+
+   mdrnum = sinit_mle_data->num_of_sinit_mdrs;
+   mdroffset = sinit_mle_data->sinit_mdrs_table_offset;
+
+   txt_early_put_heap_table(sinit_mle_data, field_offset);
+
+   if (!mdrnum)
+   goto nomdr;
+
+   mdrslen = mdrnum * sizeof(struct txt_sinit_memory_descriptor_record);
+
+   mdrs = txt_early_get_heap_table(txt, TXT_SINIT_MLE_DATA_TABLE,
+   mdroffset + mdrslen - 8);
+
+   mdr = mdrs + mdroffset - 8;
+
+   for (i = 0; i < mdrnum; i++, mdr++) {
+   /* Spec says some entries can have length 0, ignore them */
+   if (mdr->type > 0 && mdr->length > 0)
+   slaunch_txt_reserve_range(mdr->address, mdr->length);
+   }
+
+   txt_early_put_heap_table(mdrs, mdroffset + mdrslen - 8);
+
+nomdr:
+   slaunch_txt_reserve_range(ap_wake_info.ap_wake_block,
+ ap_wake_info.ap_wake_block_size);
+
+   /*
+* Earlier checks ensured that the event log was properly situated
+* either inside the TXT heap or outside. This is a check to see if the
+* event log needs to be reserved. If it is in the TXT heap, it is
+* already reserved.
+*/
+   if (evtlog_addr < heap_base || evtlog_addr > (heap_base + heap_size))
+   slaunch_txt_reserve_range(evtlog_addr, evtlog_size);
+
+   for (i = 

Re: [PATCH v6 09/14] x86: Secure Launch SMP bringup support

2023-05-05 Thread Ross Philipson

On 5/5/23 13:54, Simon Horman wrote:

On Thu, May 04, 2023 at 02:50:18PM +, Ross Philipson wrote:

On Intel, the APs are left in a well documented state after TXT performs
the late launch. Specifically they cannot have #INIT asserted on them so
a standard startup via INIT/SIPI/SIPI cannot be performed. Instead the
early SL stub code parked the APs in a pause/jmp loop waiting for an NMI.
The modified SMP boot code is called for the Secure Launch case. The
jump address for the RM piggy entry point is fixed up in the jump where
the APs are waiting and an NMI IPI is sent to the AP. The AP vectors to
the Secure Launch entry point in the RM piggy which mimics what the real
mode code would do then jumps to the standard RM piggy protected mode
entry point.

Signed-off-by: Ross Philipson 


Hi Ross,

just one minor nit on this one.


Will fix, thanks.
Ross




  /*
   * NOTE - on most systems this is a PHYSICAL apic ID, but on multiquad
   * (ie clustered apic addressing mode), this is a LOGICAL apic ID.
@@ -1132,6 +1210,13 @@ static int do_boot_cpu(int apicid, int cpu, struct 
task_struct *idle,
cpumask_clear_cpu(cpu, cpu_initialized_mask);
smp_mb();
  
+	/* With Intel TXT, the AP startup is totally different */

+   if ((slaunch_get_flags() & (SL_FLAG_ACTIVE|SL_FLAG_ARCH_TXT)) ==


nit: spaces around '|'


+  (SL_FLAG_ACTIVE|SL_FLAG_ARCH_TXT)) {
+   boot_error = slaunch_wakeup_cpu_from_txt(cpu, apicid);
+   goto txt_wake;
+   }





Re: [PATCH v6 07/14] x86: Secure Launch kernel early boot stub

2023-05-05 Thread Ross Philipson

On 5/5/23 13:47, Simon Horman wrote:

On Thu, May 04, 2023 at 02:50:16PM +, Ross Philipson wrote:

The Secure Launch (SL) stub provides the entry point for Intel TXT (and
later AMD SKINIT) to vector to during the late launch. The symbol
sl_stub_entry is that entry point and its offset into the kernel is
conveyed to the launching code using the MLE (Measured Launch
Environment) header in the structure named mle_header. The offset of the
MLE header is set in the kernel_info. The routine sl_stub contains the
very early late launch setup code responsible for setting up the basic
environment to allow the normal kernel startup_32 code to proceed. It is
also responsible for properly waking and handling the APs on Intel
platforms. The routine sl_main which runs after entering 64b mode is
responsible for measuring configuration and module information before
it is used like the boot params, the kernel command line, the TXT heap,
an external initramfs, etc.

Signed-off-by: Ross Philipson 


...


diff --git a/arch/x86/boot/compressed/sl_main.c b/arch/x86/boot/compressed/sl_main.c


...


+static void *evtlog_base;
+static u32 evtlog_size;
+static struct txt_heap_event_log_pointer2_1_element *log20_elem;
+static u32 tpm_log_ver = SL_TPM12_LOG;
+struct tcg_efi_specid_event_algs tpm_algs[SL_TPM20_MAX_ALGS] = {0};


tpm_algs seems to only be used in this file.
Should it be static?


+
+extern u32 sl_cpu_type;
+extern u32 sl_mle_start;
+extern struct boot_params *boot_params;
+
+static u64 sl_txt_read(u32 reg)


Perhaps reg should have an __iomem annotation.


+{
+   return readq((void *)(u64)(TXT_PRIV_CONFIG_REGS_BASE + reg));
+}
+
+static void sl_txt_write(u32 reg, u64 val)


Likewise here.

...


+static void sl_check_pmr_coverage(void *base, u32 size, bool allow_hi)
+{
+   struct txt_os_sinit_data *os_sinit_data;
+   void *end = base + size;
+   void *txt_heap;
+
+   if (!(sl_cpu_type & SL_CPU_INTEL))
+   return;
+
+   txt_heap = (void *)sl_txt_read(TXT_CR_HEAP_BASE);
+   os_sinit_data = txt_os_sinit_data_start(txt_heap);
+
+   if ((end >= (void *)0x100000000ULL) &&
+   (base < (void *)0x100000000ULL))
+   sl_txt_reset(SL_ERROR_REGION_STRADDLE_4GB);
+
+   /*
+* Note that the late stub code validates that the hi PMR covers
+* all memory above 4G. At this point the code can only check that
+* regions are within the hi PMR but that is sufficient.
+*/
+   if ((end > (void *)0x100000000ULL) &&
+   (base >= (void *)0x100000000ULL)) {
+   if (allow_hi) {
+   if (end >= (void *)(os_sinit_data->vtd_pmr_hi_base +
+  os_sinit_data->vtd_pmr_hi_size))
+   sl_txt_reset(SL_ERROR_BUFFER_BEYOND_PMR);
+   } else
+   sl_txt_reset(SL_ERROR_REGION_ABOVE_4GB);


nit: if any arm of a condition has '{}' then all arms should have them.
  So:

} else {
sl_txt_reset(SL_ERROR_REGION_ABOVE_4GB);
}

Also elsewhere in this patch.


+   }
+
+   if (end >= (void *)os_sinit_data->vtd_pmr_lo_size)
+   sl_txt_reset(SL_ERROR_BUFFER_BEYOND_PMR);
+}
+
+/*
+ * Some MSRs are modified by the pre-launch code including the MTRRs.
+ * The early MLE code has to restore these values. This code validates
+ * the values after they are measured.
+ */
+static void sl_txt_validate_msrs(struct txt_os_mle_data *os_mle_data)
+{
+   struct slr_txt_mtrr_state *saved_bsp_mtrrs;
+   u64 mtrr_caps, mtrr_def_type, mtrr_var;
+   struct slr_entry_intel_info *txt_info;
+   u64 misc_en_msr;
+   u32 vcnt, i;
+
+   txt_info = (struct slr_entry_intel_info *)os_mle_data->txt_info;
+   saved_bsp_mtrrs = &(txt_info->saved_bsp_mtrrs);


nit: unnecessary parentheses

...


+static void sl_validate_event_log_buffer(void)
+{
+   struct txt_os_sinit_data *os_sinit_data;
+   void *txt_heap, *txt_end;
+   void *mle_base, *mle_end;
+   void *evtlog_end;
+
+   if ((u64)evtlog_size > (LLONG_MAX - (u64)evtlog_base))
+   sl_txt_reset(SL_ERROR_INTEGER_OVERFLOW);
+   evtlog_end = evtlog_base + evtlog_size;
+
+   txt_heap = (void *)sl_txt_read(TXT_CR_HEAP_BASE);
+   txt_end = txt_heap + sl_txt_read(TXT_CR_HEAP_SIZE);
+   os_sinit_data = txt_os_sinit_data_start(txt_heap);
+
+   mle_base = (void *)(u64)sl_mle_start;
+   mle_end = mle_base + os_sinit_data->mle_size;
+
+   /*
+* This check is to ensure the event log buffer does not overlap with
+* the MLE image.
+*/
+   if ((evtlog_base >= mle_end) &&
+   (evtlog_end > mle_end))
+   goto pmr_check; /* above */


Ditto.
Also, the if condition could be one line.
Also in several other places in this patch.


+
+   if ((evtlog_end <= mle_base) &&
+   (evtlog_base < mle_base))
+  

Re: [PATCH v6 09/14] x86: Secure Launch SMP bringup support

2023-05-05 Thread Simon Horman
On Thu, May 04, 2023 at 02:50:18PM +, Ross Philipson wrote:
> On Intel, the APs are left in a well documented state after TXT performs
> the late launch. Specifically they cannot have #INIT asserted on them so
> a standard startup via INIT/SIPI/SIPI cannot be performed. Instead the
> early SL stub code parked the APs in a pause/jmp loop waiting for an NMI.
> The modified SMP boot code is called for the Secure Launch case. The
> jump address for the RM piggy entry point is fixed up in the jump where
> the APs are waiting and an NMI IPI is sent to the AP. The AP vectors to
> the Secure Launch entry point in the RM piggy which mimics what the real
> mode code would do then jumps to the standard RM piggy protected mode
> entry point.
> 
> Signed-off-by: Ross Philipson 

Hi Ross,

just one minor nit on this one.

>  /*
>   * NOTE - on most systems this is a PHYSICAL apic ID, but on multiquad
>   * (ie clustered apic addressing mode), this is a LOGICAL apic ID.
> @@ -1132,6 +1210,13 @@ static int do_boot_cpu(int apicid, int cpu, struct 
> task_struct *idle,
>   cpumask_clear_cpu(cpu, cpu_initialized_mask);
>   smp_mb();
>  
> + /* With Intel TXT, the AP startup is totally different */
> + if ((slaunch_get_flags() & (SL_FLAG_ACTIVE|SL_FLAG_ARCH_TXT)) ==

nit: spaces around '|'

> +(SL_FLAG_ACTIVE|SL_FLAG_ARCH_TXT)) {
> + boot_error = slaunch_wakeup_cpu_from_txt(cpu, apicid);
> + goto txt_wake;
> + }



Re: [PATCH v6 08/14] x86: Secure Launch kernel late boot stub

2023-05-05 Thread Simon Horman
On Thu, May 04, 2023 at 02:50:17PM +, Ross Philipson wrote:
> The routine slaunch_setup is called out of the x86 specific setup_arch
> routine during early kernel boot. After determining what platform is
> present, various operations specific to that platform occur. This
> includes finalizing setting for the platform late launch and verifying
> that memory protections are in place.
> 
> For TXT, this code also reserves the original compressed kernel setup
> area where the APs were left looping so that this memory cannot be used.
> 
> Signed-off-by: Ross Philipson 

Hi Ross,

a few nits from my side.

> +/*
> + * The TXT heap is too big to map all at once with early_ioremap
> + * so it is done a table at a time.
> + */
> +static void __init *txt_early_get_heap_table(void __iomem *txt, u32 type,
> +  u32 bytes)
> +{
> + u64 base, size, offset = 0;
> + void *heap;
> + int i;
> +
> + if (type > TXT_SINIT_TABLE_MAX)
> + slaunch_txt_reset(txt,
> + "Error invalid table type for early heap walk\n",
> + SL_ERROR_HEAP_WALK);

nit: the indentation should align to the opening '('.

slaunch_txt_reset(txt,
  "Error invalid table type for early heap 
walk\n",
  SL_ERROR_HEAP_WALK);

Likewise in a few other places in this patch.

...

> +static void __init slaunch_txt_reserve_range(u64 base, u64 size)
> +{
> + int type;
> +
> + type = e820__get_entry_type(base, base + size - 1);
> + if (type == E820_TYPE_RAM) {
> + pr_info("memblock reserve base: %llx size: %llx\n", base, size);
> + memblock_reserve(base, size);
> + }
> +}
> +
> +/*
> + * For Intel, certain regions of memory must be marked as reserved by putting
> + * them on the memblock reserved list if they are not already e820 reserved.
> + * This includes:
> + *  - The TXT HEAP
> + *  - The ACM area
> + *  - The TXT private register bank
> + *  - The MDR list sent to the MLE by the ACM (see TXT specification)
> + *  (Normally the above are properly reserved by firmware but if it was not
> + *  done, reserve them now)
> + *  - The AP wake block
> + *  - TPM log external to the TXT heap
> + *
> + * Also if the low PMR doesn't cover all memory < 4G, any RAM regions above
> + * the low PMR must be reservered too.

nit: s/reservered/reserved/

> + */
> +static void __init slaunch_txt_reserve(void __iomem *txt)
> +{
> + struct txt_sinit_memory_descriptor_record *mdr;
> + struct txt_sinit_mle_data *sinit_mle_data;
> + u64 base, size, heap_base, heap_size;
> + u32 mdrnum, mdroffset, mdrslen;
> + u32 field_offset, i;
> + void *mdrs;
> +
> + base = TXT_PRIV_CONFIG_REGS_BASE;
> + size = TXT_PUB_CONFIG_REGS_BASE - TXT_PRIV_CONFIG_REGS_BASE;
> + slaunch_txt_reserve_range(base, size);
> +
> + memcpy_fromio(&heap_base, txt + TXT_CR_HEAP_BASE, sizeof(heap_base));
> + memcpy_fromio(&heap_size, txt + TXT_CR_HEAP_SIZE, sizeof(heap_size));
> + slaunch_txt_reserve_range(heap_base, heap_size);
> +
> + memcpy_fromio(&base, txt + TXT_CR_SINIT_BASE, sizeof(base));
> + memcpy_fromio(&size, txt + TXT_CR_SINIT_SIZE, sizeof(size));
> + slaunch_txt_reserve_range(base, size);
> +
> + field_offset = offsetof(struct txt_sinit_mle_data,
> + sinit_vtd_dmar_table_size);
> + sinit_mle_data = txt_early_get_heap_table(txt, TXT_SINIT_MLE_DATA_TABLE,
> +   field_offset);
> +
> + mdrnum = sinit_mle_data->num_of_sinit_mdrs;
> + mdroffset = sinit_mle_data->sinit_mdrs_table_offset;
> +
> + txt_early_put_heap_table(sinit_mle_data, field_offset);
> +
> + if (!mdrnum)
> + goto nomdr;
> +
> + mdrslen = mdrnum * sizeof(struct txt_sinit_memory_descriptor_record);
> +
> + mdrs = txt_early_get_heap_table(txt, TXT_SINIT_MLE_DATA_TABLE,
> + mdroffset + mdrslen - 8);
> +
> + mdr = mdrs + mdroffset - 8;
> +
> + for (i = 0; i < mdrnum; i++, mdr++) {
> + /* Spec says some entries can have length 0, ignore them */
> + if (mdr->type > 0 && mdr->length > 0)
> + slaunch_txt_reserve_range(mdr->address, mdr->length);
> + }
> +
> + txt_early_put_heap_table(mdrs, mdroffset + mdrslen - 8);
> +
> +nomdr:
> + slaunch_txt_reserve_range(ap_wake_info.ap_wake_block,
> +   ap_wake_info.ap_wake_block_size);
> +
> + /*
> +  * Earlier checks ensured that the event log was properly situated
> +  * either inside the TXT heap or outside. This is a check to see if the
> +  * event log needs to be reserved. If it is in the TXT heap, it is
> +  * already reserved.
> +  */
> + if (evtlog_addr < heap_base || evtlog_addr > (heap_base + heap_size))
> + slaunch_txt_reserve_range(evtlog_addr, evtlog_size);
> 

Re: [PATCH v6 07/14] x86: Secure Launch kernel early boot stub

2023-05-05 Thread Simon Horman
On Thu, May 04, 2023 at 02:50:16PM +, Ross Philipson wrote:
> The Secure Launch (SL) stub provides the entry point for Intel TXT (and
> later AMD SKINIT) to vector to during the late launch. The symbol
> sl_stub_entry is that entry point and its offset into the kernel is
> conveyed to the launching code using the MLE (Measured Launch
> Environment) header in the structure named mle_header. The offset of the
> MLE header is set in the kernel_info. The routine sl_stub contains the
> very early late launch setup code responsible for setting up the basic
> environment to allow the normal kernel startup_32 code to proceed. It is
> also responsible for properly waking and handling the APs on Intel
> platforms. The routine sl_main which runs after entering 64b mode is
> responsible for measuring configuration and module information before
> it is used like the boot params, the kernel command line, the TXT heap,
> an external initramfs, etc.
> 
> Signed-off-by: Ross Philipson 

...

> diff --git a/arch/x86/boot/compressed/sl_main.c 
> b/arch/x86/boot/compressed/sl_main.c

...

> +static void *evtlog_base;
> +static u32 evtlog_size;
> +static struct txt_heap_event_log_pointer2_1_element *log20_elem;
> +static u32 tpm_log_ver = SL_TPM12_LOG;
> +struct tcg_efi_specid_event_algs tpm_algs[SL_TPM20_MAX_ALGS] = {0};

tpm_algs seems to only be used in this file.
Should it be static?

> +
> +extern u32 sl_cpu_type;
> +extern u32 sl_mle_start;
> +extern struct boot_params *boot_params;
> +
> +static u64 sl_txt_read(u32 reg)

Perhaps reg should have an __iomem annotation.

> +{
> + return readq((void *)(u64)(TXT_PRIV_CONFIG_REGS_BASE + reg));
> +}
> +
> +static void sl_txt_write(u32 reg, u64 val)

Likewise here.

...

> +static void sl_check_pmr_coverage(void *base, u32 size, bool allow_hi)
> +{
> + struct txt_os_sinit_data *os_sinit_data;
> + void *end = base + size;
> + void *txt_heap;
> +
> + if (!(sl_cpu_type & SL_CPU_INTEL))
> + return;
> +
> + txt_heap = (void *)sl_txt_read(TXT_CR_HEAP_BASE);
> + os_sinit_data = txt_os_sinit_data_start(txt_heap);
> +
> + if ((end >= (void *)0x100000000ULL) &&
> + (base < (void *)0x100000000ULL))
> + sl_txt_reset(SL_ERROR_REGION_STRADDLE_4GB);
> +
> + /*
> +  * Note that the late stub code validates that the hi PMR covers
> +  * all memory above 4G. At this point the code can only check that
> +  * regions are within the hi PMR but that is sufficient.
> +  */
> + if ((end > (void *)0x100000000ULL) &&
> + (base >= (void *)0x100000000ULL)) {
> + if (allow_hi) {
> + if (end >= (void *)(os_sinit_data->vtd_pmr_hi_base +
> +os_sinit_data->vtd_pmr_hi_size))
> + sl_txt_reset(SL_ERROR_BUFFER_BEYOND_PMR);
> + } else
> + sl_txt_reset(SL_ERROR_REGION_ABOVE_4GB);

nit: if any arm of a condition has '{}' then all arms should have them.
 So:

} else {
sl_txt_reset(SL_ERROR_REGION_ABOVE_4GB);
}

Also elsewhere in this patch.

> + }
> +
> + if (end >= (void *)os_sinit_data->vtd_pmr_lo_size)
> + sl_txt_reset(SL_ERROR_BUFFER_BEYOND_PMR);
> +}
> +
> +/*
> + * Some MSRs are modified by the pre-launch code including the MTRRs.
> + * The early MLE code has to restore these values. This code validates
> + * the values after they are measured.
> + */
> +static void sl_txt_validate_msrs(struct txt_os_mle_data *os_mle_data)
> +{
> + struct slr_txt_mtrr_state *saved_bsp_mtrrs;
> + u64 mtrr_caps, mtrr_def_type, mtrr_var;
> + struct slr_entry_intel_info *txt_info;
> + u64 misc_en_msr;
> + u32 vcnt, i;
> +
> + txt_info = (struct slr_entry_intel_info *)os_mle_data->txt_info;
> + saved_bsp_mtrrs = &(txt_info->saved_bsp_mtrrs);

nit: unnecessary parentheses

...

> +static void sl_validate_event_log_buffer(void)
> +{
> + struct txt_os_sinit_data *os_sinit_data;
> + void *txt_heap, *txt_end;
> + void *mle_base, *mle_end;
> + void *evtlog_end;
> +
> + if ((u64)evtlog_size > (LLONG_MAX - (u64)evtlog_base))
> + sl_txt_reset(SL_ERROR_INTEGER_OVERFLOW);
> + evtlog_end = evtlog_base + evtlog_size;
> +
> + txt_heap = (void *)sl_txt_read(TXT_CR_HEAP_BASE);
> + txt_end = txt_heap + sl_txt_read(TXT_CR_HEAP_SIZE);
> + os_sinit_data = txt_os_sinit_data_start(txt_heap);
> +
> + mle_base = (void *)(u64)sl_mle_start;
> + mle_end = mle_base + os_sinit_data->mle_size;
> +
> + /*
> +  * This check is to ensure the event log buffer does not overlap with
> +  * the MLE image.
> +  */
> + if ((evtlog_base >= mle_end) &&
> + (evtlog_end > mle_end))
> + goto pmr_check; /* above */

Ditto.
Also, the if condition could be one line.
Also in several other places in this patch.

> +
> + if ((evtlog_end <= mle_base) 

Re: [PATCH v6 05/14] x86: Secure Launch main header file

2023-05-05 Thread Ross Philipson

On 5/5/23 12:25, Simon Horman wrote:

On Thu, May 04, 2023 at 02:50:14PM +, Ross Philipson wrote:

Introduce the main Secure Launch header file used in the early SL stub
and the early setup code.

Signed-off-by: Ross Philipson 
---
  include/linux/slaunch.h | 513 
  1 file changed, 513 insertions(+)
  create mode 100644 include/linux/slaunch.h

diff --git a/include/linux/slaunch.h b/include/linux/slaunch.h


...


+/* TXTCR_STS status bits */
+#define TXT_SENTER_DONE_STS  (1<<0)
+#define TXT_SEXIT_DONE_STS   (1<<1)


nit: Please consider using BIT()


Ack



...


+/*
+ * External functions avalailable in mainline kernel.
+ */
+extern void slaunch_setup_txt(void);
+extern u32 slaunch_get_flags(void);
+extern struct sl_ap_wake_info *slaunch_get_ap_wake_info(void);
+extern struct acpi_table_header *slaunch_get_dmar_table(struct 
acpi_table_header *dmar);
+extern void __noreturn slaunch_txt_reset(void __iomem *txt,
+const char *msg, u64 error);
+extern void slaunch_finalize(int do_sexit);


I think that extern should be avoided.
Perhaps these are in a header file that can be included?


Someone in an earlier review asked me to add the externs. The functions 
are not in another header; they are in a C module.





+
+#endif /* !__ASSEMBLY */
+
+#else
+
+#define slaunch_setup_txt()  do { } while (0)
+#define slaunch_get_flags()  0
+#define slaunch_get_dmar_table(d)  (d)
+#define slaunch_finalize(d)  do { } while (0)


I think it is usual to use static functions for this purpose.
Usually they end up in header files as static inline functions.


Yea we can switch to using static functions.

Thanks
Ross




+
+#endif /* !IS_ENABLED(CONFIG_SECURE_LAUNCH) */
+
+#endif /* _LINUX_SLAUNCH_H */





Re: [PATCH v6 04/14] x86: Secure Launch Resource Table header file

2023-05-05 Thread Ross Philipson

On 5/5/23 12:22, Simon Horman wrote:

On Thu, May 04, 2023 at 02:50:13PM +, Ross Philipson wrote:

Introduce the Secure Launch Resource Table which forms the formal
interface between the pre and post launch code.

Signed-off-by: Ross Philipson 
---
  include/linux/slr_table.h | 270 ++
  1 file changed, 270 insertions(+)
  create mode 100644 include/linux/slr_table.h

diff --git a/include/linux/slr_table.h b/include/linux/slr_table.h


...


+static inline void *slr_end_of_entrys(struct slr_table *table)
+{
+   return (((void *)table) + table->size);
+}
+
+static inline struct slr_entry_hdr *
+slr_next_entry(struct slr_table *table,
+   struct slr_entry_hdr *curr)


nit: The indentation of the line above doesn't align with the '(' on
  the line before it.


Yea I see there is another one above that is not aligned too. We will 
fix those.


Thanks
Ross




Re: [PATCH v6 02/14] Documentation/x86: Secure Launch kernel documentation

2023-05-05 Thread Ross Philipson

On 5/5/23 12:19, Simon Horman wrote:

On Thu, May 04, 2023 at 02:50:11PM +, Ross Philipson wrote:

Introduce background, overview and configuration/ABI information
for the Secure Launch kernel feature.

Signed-off-by: Daniel P. Smith 
Signed-off-by: Ross Philipson 


Hi Ross and Daniel,

some minor nits from my side.


All good points. We will fix those.

Thanks
Ross




---
  Documentation/security/index.rst   |   1 +
  Documentation/security/launch-integrity/index.rst  |  10 +
  .../security/launch-integrity/principles.rst   | 313 
  .../launch-integrity/secure_launch_details.rst | 564 +
  .../launch-integrity/secure_launch_overview.rst| 220 
  5 files changed, 1108 insertions(+)
  create mode 100644 Documentation/security/launch-integrity/index.rst
  create mode 100644 Documentation/security/launch-integrity/principles.rst
  create mode 100644 
Documentation/security/launch-integrity/secure_launch_details.rst
  create mode 100644 
Documentation/security/launch-integrity/secure_launch_overview.rst

diff --git a/Documentation/security/index.rst b/Documentation/security/index.rst
index 6ed8d2f..fade37e 100644
--- a/Documentation/security/index.rst
+++ b/Documentation/security/index.rst
@@ -18,3 +18,4 @@ Security Documentation
 digsig
 landlock
 secrets/index
+   launch-integrity/index
diff --git a/Documentation/security/launch-integrity/index.rst 
b/Documentation/security/launch-integrity/index.rst
new file mode 100644
index 000..28eed91d
--- /dev/null
+++ b/Documentation/security/launch-integrity/index.rst
@@ -0,0 +1,10 @@


I believe an SPDX tag should go at the top of each .rst file.


+=
+System Launch Integrity documentation
+=
+
+.. toctree::
+
+   principles
+   secure_launch_overview
+   secure_launch_details
+
diff --git a/Documentation/security/launch-integrity/principles.rst 
b/Documentation/security/launch-integrity/principles.rst
new file mode 100644
index 000..73cf063
--- /dev/null
+++ b/Documentation/security/launch-integrity/principles.rst
@@ -0,0 +1,313 @@
+===
+System Launch Integrity
+===
+
+This document serves to establish a common understanding of what is system
+launch, the integrity concern for system launch, and why using a Root of Trust
+(RoT) from a Dynamic Launch may be desired. Through out this document
+terminology from the Trusted Computing Group (TCG) and National Institue for


s/Institue/Institute/

...


+Trust Chains
+
+
+Bulding upon the understanding of security mechanisms to establish load-time


s/Bulding/Building/

...


diff --git a/Documentation/security/launch-integrity/secure_launch_details.rst 
b/Documentation/security/launch-integrity/secure_launch_details.rst


...


+Secure Launch Resource Table
+
+
+The Secure Launch Resource Table (SLRT) is a platform-agnostic, standard format
+for providing information for the pre-launch environment and to pass
+information to the post-launch environment. The table is populated by one or
+more bootloaders in the boot chain and used by Secure Launch on how to setup
+the environment during post-launch. The details for the SLRT are documented
+in the TrenchBoot Secure Launch Specifcation [3]_.


s/Specifcation/Specification/

...





Re: [PATCH v6 06/14] x86: Add early SHA support for Secure Launch early measurements

2023-05-05 Thread Simon Horman
On Thu, May 04, 2023 at 02:50:15PM +, Ross Philipson wrote:
> From: "Daniel P. Smith" 
> 
> The SHA algorithms are necessary to measure configuration information into
> the TPM as early as possible before using the values. This implementation
> uses the established approach of #including the SHA libraries directly in
> the code since the compressed kernel is not uncompressed at this point.
> 
> The SHA code here has its origins in the code from the main kernel:
> 
> commit c4d5b9ffa31f ("crypto: sha1 - implement base layer for SHA-1")
> 
> That code could not be pulled directly into the setup portion of the
> compressed kernel because of other dependencies it pulls in. The result
> is this is a modified copy of that code that still leverages the core
> SHA algorithms.
> 
> Signed-off-by: Daniel P. Smith 
> Signed-off-by: Ross Philipson 
> ---
>  arch/x86/boot/compressed/Makefile   |  2 +
>  arch/x86/boot/compressed/early_sha1.c   | 97 
> +
>  arch/x86/boot/compressed/early_sha1.h   | 17 ++
>  arch/x86/boot/compressed/early_sha256.c |  7 +++
>  lib/crypto/sha1.c   |  4 ++
>  lib/crypto/sha256.c |  8 +++
>  6 files changed, 135 insertions(+)
>  create mode 100644 arch/x86/boot/compressed/early_sha1.c
>  create mode 100644 arch/x86/boot/compressed/early_sha1.h
>  create mode 100644 arch/x86/boot/compressed/early_sha256.c
> 
> diff --git a/arch/x86/boot/compressed/Makefile 
> b/arch/x86/boot/compressed/Makefile
> index 6b6cfe6..1d327d4 100644
> --- a/arch/x86/boot/compressed/Makefile
> +++ b/arch/x86/boot/compressed/Makefile
> @@ -112,6 +112,8 @@ vmlinux-objs-$(CONFIG_EFI) += $(obj)/efi.o
>  vmlinux-objs-$(CONFIG_EFI_MIXED) += $(obj)/efi_mixed.o
>  vmlinux-objs-$(CONFIG_EFI_STUB) += 
> $(objtree)/drivers/firmware/efi/libstub/lib.a
>  
> +vmlinux-objs-$(CONFIG_SECURE_LAUNCH) += $(obj)/early_sha1.o 
> $(obj)/early_sha256.o
> +
>  $(obj)/vmlinux: $(vmlinux-objs-y) FORCE
>   $(call if_changed,ld)
>  
> diff --git a/arch/x86/boot/compressed/early_sha1.c 
> b/arch/x86/boot/compressed/early_sha1.c
> new file mode 100644
> index 000..524ec23
> --- /dev/null
> +++ b/arch/x86/boot/compressed/early_sha1.c
> @@ -0,0 +1,97 @@
> +// SPDX-License-Identifier: GPL-2.0
> +/*
> + * Copyright (c) 2022 Apertus Solutions, LLC.
> + */
> +
> +#include 
> +#include 
> +#include 
> +#include 
> +#include 
> +
> +#include "early_sha1.h"
> +
> +#define SHA1_DISABLE_EXPORT

Hi Ross,

I could be mistaken, but it seems to me that *_DISABLE_EXPORT
is a mechanism of this patch to disable exporting symbols
(use of EXPORT_SYMBOL), at compile time.

If so, this doesn't strike me as something that should be part of upstream
kernel code.  Could you consider dropping this part of the patch?

...



Re: [PATCH v6 05/14] x86: Secure Launch main header file

2023-05-05 Thread Simon Horman
On Thu, May 04, 2023 at 02:50:14PM +, Ross Philipson wrote:
> Introduce the main Secure Launch header file used in the early SL stub
> and the early setup code.
> 
> Signed-off-by: Ross Philipson 
> ---
>  include/linux/slaunch.h | 513 
> 
>  1 file changed, 513 insertions(+)
>  create mode 100644 include/linux/slaunch.h
> 
> diff --git a/include/linux/slaunch.h b/include/linux/slaunch.h

...

> +/* TXTCR_STS status bits */
> +#define TXT_SENTER_DONE_STS  (1<<0)
> +#define TXT_SEXIT_DONE_STS   (1<<1)

nit: Please consider using BIT()

...

> +/*
> + * External functions avalailable in mainline kernel.
> + */
> +extern void slaunch_setup_txt(void);
> +extern u32 slaunch_get_flags(void);
> +extern struct sl_ap_wake_info *slaunch_get_ap_wake_info(void);
> +extern struct acpi_table_header *slaunch_get_dmar_table(struct 
> acpi_table_header *dmar);
> +extern void __noreturn slaunch_txt_reset(void __iomem *txt,
> +  const char *msg, u64 error);
> +extern void slaunch_finalize(int do_sexit);

I think that extern should be avoided.
Perhaps these are in a header file that can be included?

> +
> +#endif /* !__ASSEMBLY */
> +
> +#else
> +
> +#define slaunch_setup_txt()  do { } while (0)
> +#define slaunch_get_flags()  0
> +#define slaunch_get_dmar_table(d)(d)
> +#define slaunch_finalize(d)  do { } while (0)

I think it is usual to use static functions for this purpose.
Usually they end up in header files as static inline functions.

> +
> +#endif /* !IS_ENABLED(CONFIG_SECURE_LAUNCH) */
> +
> +#endif /* _LINUX_SLAUNCH_H */



Re: [PATCH v6 04/14] x86: Secure Launch Resource Table header file

2023-05-05 Thread Simon Horman
On Thu, May 04, 2023 at 02:50:13PM +, Ross Philipson wrote:
> Introduce the Secure Launch Resource Table which forms the formal
> interface between the pre and post launch code.
> 
> Signed-off-by: Ross Philipson 
> ---
>  include/linux/slr_table.h | 270 
> ++
>  1 file changed, 270 insertions(+)
>  create mode 100644 include/linux/slr_table.h
> 
> diff --git a/include/linux/slr_table.h b/include/linux/slr_table.h

...

> +static inline void *slr_end_of_entrys(struct slr_table *table)
> +{
> + return (((void *)table) + table->size);
> +}
> +
> +static inline struct slr_entry_hdr *
> +slr_next_entry(struct slr_table *table,
> + struct slr_entry_hdr *curr)

nit: The indentation of the line above doesn't align with the '(' on
 the line before it.



Re: [PATCH v6 02/14] Documentation/x86: Secure Launch kernel documentation

2023-05-05 Thread Simon Horman
On Thu, May 04, 2023 at 02:50:11PM +, Ross Philipson wrote:
> Introduce background, overview and configuration/ABI information
> for the Secure Launch kernel feature.
> 
> Signed-off-by: Daniel P. Smith 
> Signed-off-by: Ross Philipson 

Hi Ross and Daniel,

some minor nits from my side.

> ---
>  Documentation/security/index.rst   |   1 +
>  Documentation/security/launch-integrity/index.rst  |  10 +
>  .../security/launch-integrity/principles.rst   | 313 
>  .../launch-integrity/secure_launch_details.rst | 564 
> +
>  .../launch-integrity/secure_launch_overview.rst| 220 
>  5 files changed, 1108 insertions(+)
>  create mode 100644 Documentation/security/launch-integrity/index.rst
>  create mode 100644 Documentation/security/launch-integrity/principles.rst
>  create mode 100644 
> Documentation/security/launch-integrity/secure_launch_details.rst
>  create mode 100644 
> Documentation/security/launch-integrity/secure_launch_overview.rst
> 
> diff --git a/Documentation/security/index.rst 
> b/Documentation/security/index.rst
> index 6ed8d2f..fade37e 100644
> --- a/Documentation/security/index.rst
> +++ b/Documentation/security/index.rst
> @@ -18,3 +18,4 @@ Security Documentation
> digsig
> landlock
> secrets/index
> +   launch-integrity/index
> diff --git a/Documentation/security/launch-integrity/index.rst 
> b/Documentation/security/launch-integrity/index.rst
> new file mode 100644
> index 000..28eed91d
> --- /dev/null
> +++ b/Documentation/security/launch-integrity/index.rst
> @@ -0,0 +1,10 @@

I believe an SPDX tag should go at the top of each .rst file.
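In kernel .rst files the SPDX tag is conventionally a reStructuredText comment
on the first line; something like the following (GPL-2.0 shown only as an
illustrative license choice):

```rst
.. SPDX-License-Identifier: GPL-2.0
```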

> +=
> +System Launch Integrity documentation
> +=
> +
> +.. toctree::
> +
> +   principles
> +   secure_launch_overview
> +   secure_launch_details
> +
> diff --git a/Documentation/security/launch-integrity/principles.rst 
> b/Documentation/security/launch-integrity/principles.rst
> new file mode 100644
> index 000..73cf063
> --- /dev/null
> +++ b/Documentation/security/launch-integrity/principles.rst
> @@ -0,0 +1,313 @@
> +===
> +System Launch Integrity
> +===
> +
> +This document serves to establish a common understanding of what is system
> +launch, the integrity concern for system launch, and why using a Root of 
> Trust
> +(RoT) from a Dynamic Launch may be desired. Through out this document
> +terminology from the Trusted Computing Group (TCG) and National Institue for

s/Institue/Institute/

...

> +Trust Chains
> +
> +
> +Bulding upon the understanding of security mechanisms to establish load-time

s/Bulding/Building/

...

> diff --git 
> a/Documentation/security/launch-integrity/secure_launch_details.rst 
> b/Documentation/security/launch-integrity/secure_launch_details.rst

...

> +Secure Launch Resource Table
> +
> +
> +The Secure Launch Resource Table (SLRT) is a platform-agnostic, standard 
> format
> +for providing information for the pre-launch environment and to pass
> +information to the post-launch environment. The table is populated by one or
> +more bootloaders in the boot chain and used by Secure Launch on how to setup
> +the environment during post-launch. The details for the SLRT are documented
> +in the TrenchBoot Secure Launch Specifcation [3]_.

s/Specifcation/Specification/

...



Re: [PATCH v6 00/14] x86: Trenchboot secure dynamic launch Linux kernel support

2023-05-05 Thread Ross Philipson

On 5/5/23 04:39, Bagas Sanjaya wrote:

On Thu, May 04, 2023 at 02:50:09PM +, Ross Philipson wrote:

This patchset provides detailed documentation of DRTM, the approach used for
adding the capability, and relevant API/ABI documentation. In addition to the
documentation the patch set introduces Intel TXT support as the first platform
for Linux Secure Launch.


I'd like to apply this series, but on what commit it is based on? I
don't see any branch containing this series version in trenchboot tree
[1].


Sorry about that. In the future I will include a base-commit field. It 
is based off of torvalds/master as of 5/1/2023. The branch where the 
patches came from is now pushed to the TrenchBoot repository here:


https://github.com/TrenchBoot/linux/tree/linux-sl-master-5-1-23-v6

Thanks
Ross



Thanks.

[1]: https://github.com/TrenchBoot/linux






Re: [PATCH 3/5] arm64: change the prototype of image probe function

2023-05-05 Thread Simon Horman
On Fri, May 05, 2023 at 10:54:35AM +0800, Pingfan Liu wrote:
> Changing the aarch64 probe's prototype  from
>   typedef int (probe_t)(const char *kernel_buf, off_t kernel_size);
> to
>   typedef int (probe_t)(const char *kernel_buf, off_t kernel_size, struct 
> kexec_info *info);
> 
> Later, info can be used to return both the file descriptor and parsed kernel
> buffer. The fd is passed to sys_kexec_file_load, and the parsed kernel
> buffer is used by image's load function.
> 
> Signed-off-by: Pingfan Liu 

Hi Pingfan,

I am seeing a build failure on ARM (32bit).

  138 | int zImage_arm_probe(const char *UNUSED(buf), off_t UNUSED(len))
  | ^~~~
In file included from ../../kexec/arch/arm/kexec-zImage-arm.c:21:
../../kexec/arch/arm/kexec-arm.h:12:5: note: previous declaration of 
‘zImage_arm_probe’ was here
   12 | int zImage_arm_probe(const char *buf, off_t len, struct kexec_info 
*info);
  | ^~~~
make[1]: *** [Makefile:124: kexec/arch/arm/kexec-zImage-arm.o] Error 1
make[1]: *** Waiting for unfinished jobs
make[1]: Leaving directory 
'/home/runner/work/kexec-tools/kexec-tools/kexec-tools-2.0.26.git/_build/sub'
make: *** [Makefile:276: distcheck] Error 2
Error: Process completed with exit code 2.

Link: 
https://github.com/horms/kexec-tools/actions/runs/4895124719/jobs/8740272103

...

> diff --git a/kexec/kexec.h b/kexec/kexec.h
> index 0d820ad..6e8430e 100644
> --- a/kexec/kexec.h
> +++ b/kexec/kexec.h
> @@ -191,7 +191,13 @@ unsigned long locate_hole(struct kexec_info *info,
>   unsigned long hole_min, unsigned long hole_max,
>   int hole_end);
>  
> +#ifndef __aarch64__
>  typedef int (probe_t)(const char *kernel_buf, off_t kernel_size);
> +#else
> +typedef int (probe_t)(const char *kern_fname, off_t kernel_size,
> + struct kexec_info *info);
> +#endif
> +

This seems kind of unfortunate.
Could we update the prototype for all architectures?



Re: [PATCH 1/5] kexec: Adding missing free for kernel_buf

2023-05-05 Thread Simon Horman
On Fri, May 05, 2023 at 10:54:33AM +0800, Pingfan Liu wrote:
> slurp_decompress_file() allocates memory but nowhere to free it.
> Adding that missing free.

Hi Pingfan,

There seem to be:

a) other places where slurp_decompress_file() is called and;
b) other places where do_kexec_file_load() returns.

I wonder if a more comprehensive fix is appropriate.

> 
> Signed-off-by: Pingfan Liu 
> To: kexec@lists.infradead.org
> Cc: ho...@verge.net.au
> Cc: a...@kernel.org
> Cc: jeremy.lin...@arm.com
> ---
>  kexec/kexec.c | 1 +
>  1 file changed, 1 insertion(+)
> 
> diff --git a/kexec/kexec.c b/kexec/kexec.c
> index 36bb2ad..614cd1d 100644
> --- a/kexec/kexec.c
> +++ b/kexec/kexec.c
> @@ -1379,6 +1379,7 @@ static int do_kexec_file_load(int fileind, int argc, 
> char **argv,
>   }
>   }
>  
> + free(kernel_buf);
>   close(kernel_fd);
>   return ret;
>  }
> -- 
> 2.31.1
> 
