Re: [GRUB PATCH RFC 00/18] i386: Intel TXT secure launcher

2020-06-01 Thread Daniel P. Smith
On 6/1/20 8:49 PM, Andy Lutomirski wrote:
> 
> 
>> On Jun 1, 2020, at 5:14 PM, Daniel P. Smith  
>> wrote:
>>
>> On 6/1/20 3:39 PM, Andy Lutomirski wrote:
> .
>>
>> In other words, the log for the relaunch to attest what is currently
>> running is really no less useful than using the first launch log to
>> attest to the what was running in the first launch.
>>
> 
> Maybe it would help if you give some examples of what’s actually in this log 
> and why anyone, Linux or otherwise, cares for any purpose other than 
> debugging.  We’re talking about a log written by something like GRUB, right?  
> If so, I’m imagining things like:
> 
> GRUB: loading such-and-such module
> GRUB: loading the other module
> GRUB: loading Linux at /boot/vmlinuz-whatever
> GRUB: about to do the DRTM launch. Bye-bye.
> 
> This is surely useful for debugging.  But, if I understand your security 
> model correctly, it’s untrustworthy in the sense that this all comes from 
> before the DRTM launch and it could have been tampered with by SMM code or 
> even just a malicious USB stick.  Or even a malicious compromised kernel on 
> the same machine. So you could hash this log into a PCR, but I don’t see what 
> you’ve accomplished by doing so.
> 
> Or have I misunderstood what this log is?  Perhaps you’re talking about 
> something else entirely.
> 

Oh, I see! Yes, we are discussing two different logs, and there are indeed
two "logs" in play here. The start of this thread by Lukasz was about the
TPM event log. This is the log that records the TPM events for the DRTM
chain; you can see the SRTM equivalent with `cat
/sys/kernel/security/tpm0/ascii_bios_measurements` (provided your system
configuration is set up for SRTM). The second log, which for lack of a
better name is the "debug log", is what you are referring to and is covered
in another proposal that just recently came out, "[BOOTLOADER SPECIFICATION
RFC] The bootloader log format for TrenchBoot"[1].
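
To make it concrete, here is a rough sketch (illustrative only, not GRUB
code) of what an attestor does with the TPM event log: every logged digest
is folded into a simulated PCR and the end result is compared with the PCR
value read from the TPM, which is why the log itself needs no special
tamper protection. The sketch assumes OpenSSL's SHA256(), a SHA-256 PCR
bank and a PCR that starts at all zeroes; real logs follow the TCG event
log format and carry more fields per event.

#include <openssl/sha.h>
#include <string.h>

/* One measured event; a real TCG log entry also carries the PCR index,
   event type and event data, but only the digest matters for replay.  */
struct event
{
  unsigned char digest[SHA256_DIGEST_LENGTH];
};

/* TPM extend semantics: new_pcr = SHA256 (old_pcr || event_digest).  */
static void
pcr_extend (unsigned char pcr[SHA256_DIGEST_LENGTH],
            const unsigned char digest[SHA256_DIGEST_LENGTH])
{
  unsigned char buf[2 * SHA256_DIGEST_LENGTH];

  memcpy (buf, pcr, SHA256_DIGEST_LENGTH);
  memcpy (buf + SHA256_DIGEST_LENGTH, digest, SHA256_DIGEST_LENGTH);
  SHA256 (buf, sizeof (buf), pcr);
}

/* Replay every logged digest and compare the result with the PCR value
   read from the TPM; any tampering with the log breaks the match.  */
int
log_matches_pcr (const struct event *events, size_t n,
                 const unsigned char pcr_from_tpm[SHA256_DIGEST_LENGTH])
{
  unsigned char pcr[SHA256_DIGEST_LENGTH] = { 0 };
  size_t i;

  for (i = 0; i < n; i++)
    pcr_extend (pcr, events[i].digest);

  return memcmp (pcr, pcr_from_tpm, SHA256_DIGEST_LENGTH) == 0;
}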

You are correct, the "debug log" is just filled with messages from
components during the DRTM launch chain. While relaunch is being kept in
mind as we work through getting first launch complete, not all aspects
have been considered. What happens to the debug log on relaunch has no
security relevance, as you called out earlier, and at this point I would
say it is an exercise for those integrating relaunch when it becomes
available. I would expect them to treat it no differently than the kmsg
and syslog buffers on kexec/reboot: some may opt to keep a historical
record and others will just let it disappear into the ether.

Apologies that we got disconnected, I hope we are in sync now.

[1] https://lists.gnu.org/archive/html/grub-devel/2020-05/msg00223.html




Re: [GRUB PATCH RFC 00/18] i386: Intel TXT secure launcher

2020-06-01 Thread Andy Lutomirski


> On Jun 1, 2020, at 5:14 PM, Daniel P. Smith  
> wrote:
> 
> On 6/1/20 3:39 PM, Andy Lutomirski wrote:
 .
> 
> In other words, the log for the relaunch to attest what is currently
> running is really no less useful than using the first launch log to
> attest to the what was running in the first launch.
> 

Maybe it would help if you give some examples of what’s actually in this log 
and why anyone, Linux or otherwise, cares for any purpose other than debugging. 
 We’re talking about a log written by something like GRUB, right?  If so, I’m 
imagining things like:

GRUB: loading such-and-such module
GRUB: loading the other module
GRUB: loading Linux at /boot/vmlinuz-whatever
GRUB: about to do the DRTM launch. Bye-bye.

This is surely useful for debugging.  But, if I understand your security model 
correctly, it’s untrustworthy in the sense that this all comes from before the 
DRTM launch and it could have been tampered with by SMM code or even just a 
malicious USB stick.  Or even a malicious compromised kernel on the same 
machine. So you could hash this log into a PCR, but I don’t see what you’ve 
accomplished by doing so.

Or have I misunderstood what this log is?  Perhaps you’re talking about 
something else entirely.


Re: [GRUB PATCH RFC 00/18] i386: Intel TXT secure launcher

2020-06-01 Thread Daniel P. Smith
On 6/1/20 3:39 PM, Andy Lutomirski wrote:
> 
>> On Jun 1, 2020, at 10:56 AM, Daniel P. Smith  
>> wrote:
>>
>> On 6/1/20 12:51 PM, Andy Lutomirski wrote:
 On Mon, Jun 1, 2020 at 8:33 AM Daniel P. Smith
  wrote:

 On 5/7/20 7:06 AM, Daniel Kiper wrote:
> Hi Łukasz,
>
> On Tue, May 05, 2020 at 04:38:02PM +0200, Lukasz Hawrylko wrote:
>>> On Tue, 2020-05-05 at 01:21 +0200, Daniel Kiper wrote:

 ...

>> In OS-MLE table there is a buffer for TPM event log, however I see that
>> you are not using it, but instead allocate space somewhere in the
>
> I think that this part requires more discussion. In my opinion we should
> have this region dynamically allocated because it gives us more 
> flexibility.
> Of course there is a question about the size of this buffer too. I am
> not sure about that because I have not checked yet how many log entries
> are created by the SINIT ACM. Though probably it should not be large...
>
>> memory. I am just wondering if, from security perspective, it will be
>> better to use memory from TXT heap for event log, like we do in TBOOT.
>
> Appendix F, TPM Event Log, has following sentence: There are no
> requirements for event log to be in DMA protected memory – SINIT will
> not enforce it.
>
> I was thinking about it and it seems to me that the TPM event log does
> not require any special protections. Any changes in it can be quickly
> detected by comparing hashes with the TPM PCRs. Does not it?
>

 I think it would be beneficial to consider the following in deciding
 where the log is placed. There are two areas of attack/manipulation that
 need to be considered. The first area is the log contents itself, which
 as Daniel has pointed out, the log contents do not really need to be
 protected from tampering as that would/should be detected during
 verification by the attestor. The second area that we need to consider
 is the log descriptors themselves. If these are in an area that can be
 manipulated, it is an opportunity for an attacker to attempt to
 influence the ACM's execution. For a little perspective, the ACM
 executes from CRAM to take the most possible precaution to ensure that
 it cannot be tampered with during execution. This is very important
 since it runs a CPU mode (ACM Mode) that I would consider to be higher
 (or lower depending on how you view it) than SMM. As a result, the txt
 heap is also included in what is mapped into CRAM. If the event log is
 place in the heap, this ensures that the ACM is not using memory outside
 of CRAM during execution. Now as Daniel has pointed out, the down side
 to this is that it greatly restricts the log size and can only be
 managed by a combination of limiting the number of events and
 restricting what content is carried in the event data field.
>>>
>>> Can you clarify what the actual flow of control is?  If I had to guess, 
>>> it's:
>>>
>>> GRUB (or other bootloader) writes log.
>>>
>>> GRUB transfers control to the ACM.  At this point, GRUB is done
>>> running and GRUB code will not run again.
>>>
>>> ACM validates system configuration and updates TPM state using magic
>>> privileged TPM access.
>>>
>>> ACM transfers control to the shiny new Linux secure launch entry point
>>>
>>> Maybe this is right, and maybe this is wrong.  But I have some
>>> questions about this whole setup.  Is the ACM code going to inspect
>>> this log at all?  If so, why?  Who supplies the ACM code?  If the ACM
>>> can be attacked by putting its inputs (e.g. this log) in the wrong
>>> place in memory, why should this be considered anything other than a
>>> bug in the ACM?
>>
>> There is a lot behind that, so to get a complete detail of the event
>> sequence I would recommend looking at Section Vol. 2D 6.2.3 (pg Vol. 2D
>> 6-5/ pdf pg 2531), 6.3 GETSEC[ENTERACCS](pg 6-10 Vol. 2D/pdf pg 2546),
>> and 6.3 GETSEC[SENTER](pg Vol. 2D 6-21/pdf pg 2557) in the Intel SDM[1].
>> Section 6.2.3 gives a slightly detailed overview. Section
>> GETSEC[ENTERACCS] details the requirements/procedures for entering AC
>> execution mode and ACRAM (Authenticated CRAM) and section GETSEC[SENTER]
>> will detail requirements/procedures for SENTER.
>>
>> To answer you additional questions I would say if you look at all the
>> work that goes into protecting the ACM execution, why would you want to
>> then turn around and have it use memory outside of the protected region.
>> On the other hand, you are right, if the Developer's Guide says it
>> doesn't need to be protected and someone somehow finds a way to cause a
>> failure in the ACM through the use of a log outside of CRAM, then
>> rightfully that is a bug in the ACM. This is why I asked about making it
>> configurable, paranoid people could set the configuration to use the
>> heap and all others could just use an external location.

Re: [GRUB PATCH RFC 00/18] i386: Intel TXT secure launcher

2020-06-01 Thread Andy Lutomirski

> On Jun 1, 2020, at 10:56 AM, Daniel P. Smith  
> wrote:
> 
> On 6/1/20 12:51 PM, Andy Lutomirski wrote:
>>> On Mon, Jun 1, 2020 at 8:33 AM Daniel P. Smith
>>>  wrote:
>>> 
>>> On 5/7/20 7:06 AM, Daniel Kiper wrote:
 Hi Łukasz,
 
 On Tue, May 05, 2020 at 04:38:02PM +0200, Lukasz Hawrylko wrote:
>> On Tue, 2020-05-05 at 01:21 +0200, Daniel Kiper wrote:
>>> 
>>> ...
>>> 
> In OS-MLE table there is a buffer for TPM event log, however I see that
> you are not using it, but instead allocate space somewhere in the
 
 I think that this part requires more discussion. In my opinion we should
 have this region dynamically allocated because it gives us more 
 flexibility.
 Of course there is a question about the size of this buffer too. I am
 not sure about that because I have not checked yet how many log entries
 are created by the SINIT ACM. Though probably it should not be large...
 
> memory. I am just wondering if, from security perspective, it will be
> better to use memory from TXT heap for event log, like we do in TBOOT.
 
 Appendix F, TPM Event Log, has following sentence: There are no
 requirements for event log to be in DMA protected memory – SINIT will
 not enforce it.
 
 I was thinking about it and it seems to me that the TPM event log does
 not require any special protections. Any changes in it can be quickly
 detected by comparing hashes with the TPM PCRs. Does not it?
 
>>> 
>>> I think it would be beneficial to consider the following in deciding
>>> where the log is placed. There are two areas of attack/manipulation that
>>> need to be considered. The first area is the log contents itself, which
>>> as Daniel has pointed out, the log contents do not really need to be
>>> protected from tampering as that would/should be detected during
>>> verification by the attestor. The second area that we need to consider
>>> is the log descriptors themselves. If these are in an area that can be
>>> manipulated, it is an opportunity for an attacker to attempt to
>>> influence the ACM's execution. For a little perspective, the ACM
>>> executes from CRAM to take the most possible precaution to ensure that
>>> it cannot be tampered with during execution. This is very important
>>> since it runs a CPU mode (ACM Mode) that I would consider to be higher
>>> (or lower depending on how you view it) than SMM. As a result, the txt
>>> heap is also included in what is mapped into CRAM. If the event log is
>>> place in the heap, this ensures that the ACM is not using memory outside
>>> of CRAM during execution. Now as Daniel has pointed out, the down side
>>> to this is that it greatly restricts the log size and can only be
>>> managed by a combination of limiting the number of events and
>>> restricting what content is carried in the event data field.
>> 
>> Can you clarify what the actual flow of control is?  If I had to guess, it's:
>> 
>> GRUB (or other bootloader) writes log.
>> 
>> GRUB transfers control to the ACM.  At this point, GRUB is done
>> running and GRUB code will not run again.
>> 
>> ACM validates system configuration and updates TPM state using magic
>> privileged TPM access.
>> 
>> ACM transfers control to the shiny new Linux secure launch entry point
>> 
>> Maybe this is right, and maybe this is wrong.  But I have some
>> questions about this whole setup.  Is the ACM code going to inspect
>> this log at all?  If so, why?  Who supplies the ACM code?  If the ACM
>> can be attacked by putting its inputs (e.g. this log) in the wrong
>> place in memory, why should this be considered anything other than a
>> bug in the ACM?
> 
> There is a lot behind that, so to get a complete detail of the event
> sequence I would recommend looking at Section Vol. 2D 6.2.3 (pg Vol. 2D
> 6-5/ pdf pg 2531), 6.3 GETSEC[ENTERACCS](pg 6-10 Vol. 2D/pdf pg 2546),
> and 6.3 GETSEC[SENTER](pg Vol. 2D 6-21/pdf pg 2557) in the Intel SDM[1].
> Section 6.2.3 gives a slightly detailed overview. Section
> GETSEC[ENTERACCS] details the requirements/procedures for entering AC
> execution mode and ACRAM (Authenticated CRAM) and section GETSEC[SENTER]
> will detail requirements/procedures for SENTER.
> 
> To answer you additional questions I would say if you look at all the
> work that goes into protecting the ACM execution, why would you want to
> then turn around and have it use memory outside of the protected region.
> On the other hand, you are right, if the Developer's Guide says it
> doesn't need to be protected and someone somehow finds a way to cause a
> failure in the ACM through the use of a log outside of CRAM, then
> rightfully that is a bug in the ACM. This is why I asked about making it
> configurable, paranoid people could set the configuration to use the
> heap and all others could just use an external location.

And this provides no protection whatsoever to paranoid people AFAICS, unless 
the location gets hashed before any 

[RFC PATCH 3/3] efi: mm: implement runtime addition of pages

2020-06-01 Thread Patrick Steinhardt
Adjust the interface of `grub_efi_mm_add_regions ()` to take a set of
`GRUB_MM_REGION_*` flags, which for now is only the `CONSECUTIVE` flag.
This allows us to set the function up as a callback for the memory
subsystem and have it call out to us in case there aren't enough pages
available in the current heap.

Signed-off-by: Patrick Steinhardt 
---
 grub-core/kern/efi/mm.c | 15 +++
 1 file changed, 11 insertions(+), 4 deletions(-)

diff --git a/grub-core/kern/efi/mm.c b/grub-core/kern/efi/mm.c
index 76f0f6a0f..092097355 100644
--- a/grub-core/kern/efi/mm.c
+++ b/grub-core/kern/efi/mm.c
@@ -468,7 +468,8 @@ static void
 add_memory_regions (grub_efi_memory_descriptor_t *memory_map,
grub_efi_uintn_t desc_size,
grub_efi_memory_descriptor_t *memory_map_end,
-   grub_efi_uint64_t required_pages)
+   grub_efi_uint64_t required_pages,
+   char consecutive)
 {
   grub_efi_memory_descriptor_t *desc;
 
@@ -482,6 +483,10 @@ add_memory_regions (grub_efi_memory_descriptor_t *memory_map,
 
   start = desc->physical_start;
   pages = desc->num_pages;
+
+  if (pages < required_pages && consecutive)
+   continue;
+
   if (pages > required_pages)
{
  start += PAGES_TO_BYTES (pages - required_pages);
@@ -543,7 +548,7 @@ print_memory_map (grub_efi_memory_descriptor_t *memory_map,
 #endif
 
 static grub_err_t
-grub_efi_mm_add_regions (grub_efi_uint64_t required_bytes)
+grub_efi_mm_add_regions (grub_efi_uint64_t required_bytes, unsigned flags)
 {
   grub_efi_memory_descriptor_t *memory_map;
   grub_efi_memory_descriptor_t *memory_map_end;
@@ -597,7 +602,8 @@ grub_efi_mm_add_regions (grub_efi_uint64_t required_bytes)
   /* Allocate memory regions for GRUB's memory management.  */
   add_memory_regions (filtered_memory_map, desc_size,
  filtered_memory_map_end,
- BYTES_TO_PAGES (required_bytes));
+ BYTES_TO_PAGES (required_bytes),
+ flags & GRUB_MM_REGION_CONSECUTIVE);
 
 #if 0
   /* For debug.  */
@@ -622,8 +628,9 @@ grub_efi_mm_add_regions (grub_efi_uint64_t required_bytes)
 void
 grub_efi_mm_init (void)
 {
-  if (grub_efi_mm_add_regions (DEFAULT_HEAP_SIZE) != GRUB_ERR_NONE)
+  if (grub_efi_mm_add_regions (DEFAULT_HEAP_SIZE, 0) != GRUB_ERR_NONE)
 grub_fatal (grub_errmsg);
+  grub_mm_region_fn = grub_efi_mm_add_regions;
 }
 
 #if defined (__aarch64__) || defined (__arm__) || defined (__riscv)
-- 
2.26.2





[RFC PATCH 1/3] mm: allow dynamically requesting additional memory regions

2020-06-01 Thread Patrick Steinhardt
Currently, all platforms set up their heap on initialization of the
platform code. While this mostly works fine, it imposes some limitations
on memory management. Most notably, allocating big chunks of memory in
the gigabyte range would require us to pre-request that many bytes from
the firmware and add them to the heap from the beginning on some
platforms like EFI. As this isn't needed for most configurations, it is
inefficient and may even negatively impact some use cases when
chainloading. Nonetheless, allocating big chunks of memory is sometimes
required, one example being the upcoming support for the Argon2 key
derivation function in LUKS2.

In order to avoid pre-allocating big chunks of memory, this commit
implements a runtime mechanism to add more pages to the system. When a
given allocation cannot currently be satisfied, we call a callback set up
by the platform's own memory management subsystem, asking it to add a
memory area with at least `n` bytes. If this succeeds, we retry searching
for a valid memory region, which should now succeed.

Signed-off-by: Patrick Steinhardt 
---
 grub-core/kern/mm.c | 11 ++-
 include/grub/mm.h   | 13 +
 2 files changed, 23 insertions(+), 1 deletion(-)

diff --git a/grub-core/kern/mm.c b/grub-core/kern/mm.c
index ee88ff611..e4badd104 100644
--- a/grub-core/kern/mm.c
+++ b/grub-core/kern/mm.c
@@ -79,6 +79,7 @@
 
 
 grub_mm_region_t grub_mm_base;
+grub_mm_region_func_t grub_mm_region_fn;
 
 /* Get a header from the pointer PTR, and set *P and *R to a pointer
to the header and a pointer to its region, respectively. PTR must
@@ -358,8 +359,16 @@ grub_memalign (grub_size_t align, grub_size_t size)
   count++;
   goto again;
 
-#if 0
 case 1:
+  /* Request additional pages.  */
+  count++;
+
+  if (grub_mm_region_fn && grub_mm_region_fn (size, GRUB_MM_REGION_CONSECUTIVE) == GRUB_ERR_NONE)
+   goto again;
+
+  /* fallthrough */
+#if 0
+case 2:
   /* Unload unneeded modules.  */
   grub_dl_unload_unneeded ();
   count++;
diff --git a/include/grub/mm.h b/include/grub/mm.h
index 28e2e53eb..227e8b54c 100644
--- a/include/grub/mm.h
+++ b/include/grub/mm.h
@@ -20,6 +20,7 @@
 #ifndef GRUB_MM_H
 #define GRUB_MM_H  1
 
+#include 
 #include 
 #include 
 #include 
@@ -28,6 +29,18 @@
 # define NULL  ((void *) 0)
 #endif
 
+enum {
+  GRUB_MM_REGION_CONSECUTIVE = 1 << 0,
+};
+
+/* Function used to request memory regions of `grub_size_t` bytes. The second
+ * parameter is a bitfield of `GRUB_MM_REGION` flags.  */
+typedef grub_err_t (*grub_mm_region_func_t) (grub_size_t, unsigned);
+
+/* Set this function pointer to enable adding memory-regions at runtime in case
+ * a memory allocation cannot be satisfied with existing regions.  */
+extern grub_mm_region_func_t EXPORT_VAR(grub_mm_region_fn);
+
 void grub_mm_init_region (void *addr, grub_size_t size);
 void *EXPORT_FUNC(grub_malloc) (grub_size_t size);
 void *EXPORT_FUNC(grub_zalloc) (grub_size_t size);
-- 
2.26.2
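
For reviewers, a rough sketch (illustrative only, not part of this patch)
of how a platform port would be expected to hook into this, using the
names introduced above; my_platform_alloc () is a stand-in for whatever
firmware or platform allocator is actually available:

#include <grub/err.h>
#include <grub/mm.h>

/* Hypothetical platform allocator returning `size' contiguous bytes.  */
void *my_platform_alloc (grub_size_t size);

/* Callback matching grub_mm_region_func_t: invoked by grub_memalign ()
   when no existing region can satisfy an allocation of `size' bytes.  */
static grub_err_t
my_platform_add_regions (grub_size_t size, unsigned flags)
{
  void *chunk = my_platform_alloc (size);

  /* This sketch always hands back one contiguous region, which also
     satisfies GRUB_MM_REGION_CONSECUTIVE, so `flags' needs no special
     handling here.  */
  (void) flags;

  if (chunk == NULL)
    return grub_error (GRUB_ERR_OUT_OF_MEMORY, "cannot grow heap");

  grub_mm_init_region (chunk, size);
  return GRUB_ERR_NONE;
}

void
my_platform_mm_init (void)
{
  /* ... set up the initial heap as before ... */
  grub_mm_region_fn = my_platform_add_regions;
}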





[RFC PATCH 2/3] efi: mm: extract function to add memory regions

2020-06-01 Thread Patrick Steinhardt
In preparation for supporting runtime allocation of additional memory
regions, this patch extracts the function that retrieves the EFI memory
map and adds a subset of it to GRUB's own memory regions.

Note that this commit also changes how many bytes we request by default.
Previously, we would have tried to allocate a quarter of the available
system memory, bounded by a minimum/maximum value. As we're about to
implement runtime allocation of memory, we now instead always request the
minimum amount of bytes and let the memory allocator call out to our
callback.

Signed-off-by: Patrick Steinhardt 
---
 grub-core/kern/efi/mm.c | 57 ++---
 1 file changed, 19 insertions(+), 38 deletions(-)

diff --git a/grub-core/kern/efi/mm.c b/grub-core/kern/efi/mm.c
index 457772d57..76f0f6a0f 100644
--- a/grub-core/kern/efi/mm.c
+++ b/grub-core/kern/efi/mm.c
@@ -38,9 +38,8 @@
a multiplier of 4KB.  */
 #define MEMORY_MAP_SIZE0x3000
 
-/* The minimum and maximum heap size for GRUB itself.  */
-#define MIN_HEAP_SIZE  0x10
-#define MAX_HEAP_SIZE  (1600 * 0x10)
+/* The default heap size for GRUB itself in bytes.  */
+#define DEFAULT_HEAP_SIZE  0x10
 
 static void *finish_mmap_buf = 0;
 static grub_efi_uintn_t finish_mmap_size = 0;
@@ -464,23 +463,6 @@ filter_memory_map (grub_efi_memory_descriptor_t *memory_map,
   return filtered_desc;
 }
 
-/* Return the total number of pages.  */
-static grub_efi_uint64_t
-get_total_pages (grub_efi_memory_descriptor_t *memory_map,
-grub_efi_uintn_t desc_size,
-grub_efi_memory_descriptor_t *memory_map_end)
-{
-  grub_efi_memory_descriptor_t *desc;
-  grub_efi_uint64_t total = 0;
-
-  for (desc = memory_map;
-   desc < memory_map_end;
-   desc = NEXT_MEMORY_DESCRIPTOR (desc, desc_size))
-total += desc->num_pages;
-
-  return total;
-}
-
 /* Add memory regions.  */
 static void
 add_memory_regions (grub_efi_memory_descriptor_t *memory_map,
@@ -508,7 +490,7 @@ add_memory_regions (grub_efi_memory_descriptor_t *memory_map,
 
   addr = grub_efi_allocate_pages_real (start, pages,
   GRUB_EFI_ALLOCATE_ADDRESS,
-  GRUB_EFI_LOADER_CODE);  
+  GRUB_EFI_LOADER_CODE);
   if (! addr)
grub_fatal ("cannot allocate conventional memory %p with %u pages",
(void *) ((grub_addr_t) start),
@@ -560,8 +542,8 @@ print_memory_map (grub_efi_memory_descriptor_t *memory_map,
 }
 #endif
 
-void
-grub_efi_mm_init (void)
+static grub_err_t
+grub_efi_mm_add_regions (grub_efi_uint64_t required_bytes)
 {
   grub_efi_memory_descriptor_t *memory_map;
   grub_efi_memory_descriptor_t *memory_map_end;
@@ -569,14 +551,12 @@ grub_efi_mm_init (void)
   grub_efi_memory_descriptor_t *filtered_memory_map_end;
   grub_efi_uintn_t map_size;
   grub_efi_uintn_t desc_size;
-  grub_efi_uint64_t total_pages;
-  grub_efi_uint64_t required_pages;
   int mm_status;
 
   /* Prepare a memory region to store two memory maps.  */
  memory_map = grub_efi_allocate_any_pages (2 * BYTES_TO_PAGES (MEMORY_MAP_SIZE));
   if (! memory_map)
-grub_fatal ("cannot allocate memory");
+return grub_error (GRUB_ERR_OUT_OF_MEMORY, "cannot allocate memory");
 
   /* Obtain descriptors for available memory.  */
   map_size = MEMORY_MAP_SIZE;
@@ -594,14 +574,14 @@ grub_efi_mm_init (void)
 
   memory_map = grub_efi_allocate_any_pages (2 * BYTES_TO_PAGES (map_size));
   if (! memory_map)
-   grub_fatal ("cannot allocate memory");
+   return grub_error (GRUB_ERR_OUT_OF_MEMORY, "cannot allocate memory");
 
  mm_status = grub_efi_get_memory_map (&map_size, memory_map, 0,
                                       &desc_size, 0);
 }
 
   if (mm_status < 0)
-grub_fatal ("cannot get memory map");
+return grub_error (GRUB_ERR_OUT_OF_MEMORY, "cannot get memory map");
 
   memory_map_end = NEXT_MEMORY_DESCRIPTOR (memory_map, map_size);
 
@@ -610,22 +590,14 @@ grub_efi_mm_init (void)
   filtered_memory_map_end = filter_memory_map (memory_map, filtered_memory_map,
   desc_size, memory_map_end);
 
-  /* By default, request a quarter of the available memory.  */
-  total_pages = get_total_pages (filtered_memory_map, desc_size,
-filtered_memory_map_end);
-  required_pages = (total_pages >> 2);
-  if (required_pages < BYTES_TO_PAGES (MIN_HEAP_SIZE))
-required_pages = BYTES_TO_PAGES (MIN_HEAP_SIZE);
-  else if (required_pages > BYTES_TO_PAGES (MAX_HEAP_SIZE))
-required_pages = BYTES_TO_PAGES (MAX_HEAP_SIZE);
-
   /* Sort the filtered descriptors, so that GRUB can allocate pages
  from smaller regions.  */
   sort_memory_map (filtered_memory_map, desc_size, filtered_memory_map_end);
 
   /* Allocate memory regions for GRUB's memory management.  */
   add_memory_regions (filtered_memory_map, desc_size,
-  

[RFC PATCH 0/3] Runtime allocation of memory regions

2020-06-01 Thread Patrick Steinhardt
Hi,

as you may be aware, GRUB's LUKS2 support currently lacks the Argon2 key
derivation function (KDF). While the implementation itself isn't much of a
problem in theory, one issue we currently have is that Argon2 is a
memory-hard KDF and thus requires huge chunks of memory. The memory
management subsystem isn't currently up to the task of handling huge
allocations on all platforms. So back in March, we decided to improve
the MM subsystem first before landing Argon2 support.

This patch series is a first RFC to ask for feedback on my chosen
approach. It implements a new callback that may be set up by the
platform's MM implementation. In case the currently allocated memory
regions don't have sufficient bytes available to satisfy a given memory
allocation, the callback will be invoked in order to make an additional
memory region available that can satisfy the request.

As such, we avoid all the downsides of pre-allocating a huge heap and
can just dynamically handle any huge requests.

I only compile-tested the code, meaning it probably won't work
out-of-the-box. My intention is mainly to first get an agreement on
whether this seems like a sensible approach to fix the problem.

Patrick

Patrick Steinhardt (3):
  mm: allow dynamically requesting additional memory regions
  efi: mm: extract function to add memory regions
  efi: mm: implement runtime addition of pages

 grub-core/kern/efi/mm.c | 66 +
 grub-core/kern/mm.c | 11 ++-
 include/grub/mm.h   | 13 
 3 files changed, 50 insertions(+), 40 deletions(-)

-- 
2.26.2





Re: [GRUB PATCH RFC 00/18] i386: Intel TXT secure launcher

2020-06-01 Thread Andy Lutomirski
On Mon, Jun 1, 2020 at 8:33 AM Daniel P. Smith
 wrote:
>
> On 5/7/20 7:06 AM, Daniel Kiper wrote:
> > Hi Łukasz,
> >
> > On Tue, May 05, 2020 at 04:38:02PM +0200, Lukasz Hawrylko wrote:
> >> On Tue, 2020-05-05 at 01:21 +0200, Daniel Kiper wrote:
>
> ...
>
> >> In OS-MLE table there is a buffer for TPM event log, however I see that
> >> you are not using it, but instead allocate space somewhere in the
> >
> > I think that this part requires more discussion. In my opinion we should
> > have this region dynamically allocated because it gives us more flexibility.
> > Of course there is a question about the size of this buffer too. I am
> > not sure about that because I have not checked yet how many log entries
> > are created by the SINIT ACM. Though probably it should not be large...
> >
> >> memory. I am just wondering if, from security perspective, it will be
> >> better to use memory from TXT heap for event log, like we do in TBOOT.
> >
> > Appendix F, TPM Event Log, has following sentence: There are no
> > requirements for event log to be in DMA protected memory – SINIT will
> > not enforce it.
> >
> > I was thinking about it and it seems to me that the TPM event log does
> > not require any special protections. Any changes in it can be quickly
> > detected by comparing hashes with the TPM PCRs. Does not it?
> >
>
> I think it would be beneficial to consider the following in deciding
> where the log is placed. There are two areas of attack/manipulation that
> need to be considered. The first area is the log contents itself, which
> as Daniel has pointed out, the log contents do not really need to be
> protected from tampering as that would/should be detected during
> verification by the attestor. The second area that we need to consider
> is the log descriptors themselves. If these are in an area that can be
> manipulated, it is an opportunity for an attacker to attempt to
> influence the ACM's execution. For a little perspective, the ACM
> executes from CRAM to take the most possible precaution to ensure that
> it cannot be tampered with during execution. This is very important
> since it runs a CPU mode (ACM Mode) that I would consider to be higher
> (or lower depending on how you view it) than SMM. As a result, the txt
> heap is also included in what is mapped into CRAM. If the event log is
> place in the heap, this ensures that the ACM is not using memory outside
> of CRAM during execution. Now as Daniel has pointed out, the down side
> to this is that it greatly restricts the log size and can only be
> managed by a combination of limiting the number of events and
> restricting what content is carried in the event data field.

Can you clarify what the actual flow of control is?  If I had to guess, it's:

GRUB (or other bootloader) writes log.

GRUB transfers control to the ACM.  At this point, GRUB is done
running and GRUB code will not run again.

ACM validates system configuration and updates TPM state using magic
privileged TPM access.

ACM transfers control to the shiny new Linux secure launch entry point

Maybe this is right, and maybe this is wrong.  But I have some
questions about this whole setup.  Is the ACM code going to inspect
this log at all?  If so, why?  Who supplies the ACM code?  If the ACM
can be attacked by putting its inputs (e.g. this log) in the wrong
place in memory, why should this be considered anything other than a
bug in the ACM?

If GRUB is indeed done by the time anyone consumes the log, why does
GRUB care where the log ends up?

And finally, the log presumably has nonzero size, and it would be nice
not to pin some physical memory forever for the log.  Could the kernel
copy it into tmpfs during boot so it's at least swappable and then
allow userspace to delete it when userspace is done with it?



Re: [BOOTLOADER SPECIFICATION RFC] The bootloader log format for TrenchBoot and others

2020-06-01 Thread Hans Ulrich Niedermann
On Fri, 29 May 2020 13:27:35 +0200
Daniel Kiper  wrote:

> Below you can find my rough idea of the bootloader log format which is
> generic thing but initially will be used for TrenchBoot work. I
> discussed this proposal with Ross and Daniel S. So, the idea went
> through initial sanitization. Now I would like to take feedback from
> other folks too. So, please take a look and complain...
> 
> In general we want to pass the messages produced by the bootloader to
> the OS kernel and finally to the userspace for further processing and
> analysis. Below is the description of the structures which will be
> used for this thing.

This should mention that this is about having one contiguous block of
memory which contains a struct bootloader_log.

>   struct bootloader_log_msgs

Why the plural with the trailing letter s in the struct name? This looks
like a single message to me, and should thus be singular without a
trailing letter s.

>   {
> grub_uint32_t level;
> grub_uint32_t facility;
> char type[];
> char msg[];
>   }

How would this work? How could a compiler know where msg starts? gcc
just reports "error: flexible array member not at end of struct" here
as there is no way of knowing.

Where are the sizes of type[] and msg[] defined?

Only implicitly by just having two NUL terminated strings right after
each other? That would mean you need to change the struct to

struct bootloader_log_msg
{
  grub_uint32_t level;
  grub_uint32_t facility;
  char strings[];
}

and have anyone parsing this structure walk through all characters in
strings[] looking for NULs to eventually find out where the msg string
actually starts. This looks at least a bit ugly to me, unless you
absolutely need to save the last bit of code and data memory in the
bootloader.

To help find where the msg actually starts without needing to look for
a NUL character, you could add a struct member showing where the msg
string actually starts within strings[]:

struct bootloader_log_msg
{
  grub_uint32_t level;
  grub_uint32_t facility;
  grub_uint32_t msg_start;
  char strings[];
}

This msg_start value could be defined as a character offset into
strings[]. Then accessing msg->strings or &msg->strings[0] would access
the type string, and &msg->strings[msg->msg_start] would access the message:

struct bootloader_log_msg *msg = ...;
printf("msg type: %s\n", &msg->strings[0]);
printf("msg:  %s\n", &msg->strings[msg->msg_start]);

For quick parsing of the messages, having a "grub_uint32_t size"
struct member to quickly skip the current struct bootloader_log_msg
completely and jump directly to the next struct would be helpful,
regardless of how the strings are actually stored:

struct bootloader_log_msg
{
  grub_uint32_t level;
  grub_uint32_t facility;
  grub_uint32_t size;
  [ ... string storage ... ]
}
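
To illustrate, with both a size and a msg_start member a consumer could
walk the whole log without ever scanning for NULs. Rough sketch only,
combining the fields proposed above and using a plain byte array for the
message area (since variable-length records cannot form a proper C array):

#include <stdint.h>
#include <stdio.h>

struct bootloader_log_msg
{
  uint32_t level;
  uint32_t facility;
  uint32_t size;       /* total size of this record in bytes, header included */
  uint32_t msg_start;  /* offset of the msg string inside strings[] */
  char strings[];      /* type string, then msg string, both NUL terminated */
};

struct bootloader_log
{
  uint32_t version;
  uint32_t producer;
  uint32_t size;       /* total size of the log buffer in bytes */
  uint32_t next_off;   /* offset of the next free message slot */
  char msgs[];         /* packed bootloader_log_msg records */
};

void
dump_log (const struct bootloader_log *log)
{
  uint32_t off = sizeof (*log);

  while (off < log->next_off)
    {
      const struct bootloader_log_msg *m =
        (const struct bootloader_log_msg *) ((const char *) log + off);

      printf ("[%u/%u] %s: %s\n", (unsigned) m->level, (unsigned) m->facility,
              m->strings, m->strings + m->msg_start);

      off += m->size;  /* jump straight to the next record */
    }
}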

>   struct bootloader_log
>   {
> grub_uint32_t version;
> grub_uint32_t producer;
> grub_uint32_t size;
> grub_uint32_t next_off;
> bootloader_log_msgs msgs[];
>   }
> 
> The members of struct bootloader_log:
>   - version: the bootloader log format version number, 1 for now,
>   - producer: the producer/bootloader type; we can steal some values
> from linux/Documentation/x86/boot.rst:type_of_loader,

>   - size: total size of the log buffer including the bootloader_log
> struct,

"size in bytes"?

>   - next_off: offset in bytes, from start of the bootloader_log
> struct, of the next byte after the last log message in the msgs[];
> i.e. the offset of the next available log message slot,

It appears to me that next_off is only interesting for code which
appends log messages to that structure. For reading such a struct,
next_off is useless and could thus be a private variable inside that
bootloader which is not passed on to a next stage bootloader or an OS
kernel.

So specifying next_off as something other than a "reserved" uint32_t is
for when you have a chain of bootloaders which all append messages to
that buffer, and you want to avoid each bootloader having to parse the
previous bootloader's messages in order to find out where to append its
own. If that is the intention, this procedure should at least be
mentioned somewhere.

I am also missing any mention of memory alignment. With the number of
CPUs in the field which cannot directly read misaligned words
increasing, specifying a 4 byte or 8 byte alignment for these
structures could significantly reduce the code complexity for such
CPUs at the cost of a few bytes of memory.

And while on the topic of memory layout: With all these uint32_t
values, is this only for a 32bit protocol, or will this remain 32bit
even for otherwise 64bit code, or what is the plan here?

>   - msgs: the array of log messages.
> 
> The members of struct bootloader_log_msgs:
>   - level: similar to syslog meaning; can be used to differentiate
> normal messages from debug messages; exact 

Re: [GRUB PATCH RFC 00/18] i386: Intel TXT secure launcher

2020-06-01 Thread Ross Philipson
On 6/1/20 1:56 PM, Daniel P. Smith wrote:
> On 6/1/20 12:51 PM, Andy Lutomirski wrote:
>> On Mon, Jun 1, 2020 at 8:33 AM Daniel P. Smith
>>  wrote:
>>>
>>> On 5/7/20 7:06 AM, Daniel Kiper wrote:
 Hi Łukasz,

 On Tue, May 05, 2020 at 04:38:02PM +0200, Lukasz Hawrylko wrote:
> On Tue, 2020-05-05 at 01:21 +0200, Daniel Kiper wrote:
>>>
>>> ...
>>>
> In OS-MLE table there is a buffer for TPM event log, however I see that
> you are not using it, but instead allocate space somewhere in the

 I think that this part requires more discussion. In my opinion we should
 have this region dynamically allocated because it gives us more 
 flexibility.
 Of course there is a question about the size of this buffer too. I am
 not sure about that because I have not checked yet how many log entries
 are created by the SINIT ACM. Though probably it should not be large...

> memory. I am just wondering if, from security perspective, it will be
> better to use memory from TXT heap for event log, like we do in TBOOT.

 Appendix F, TPM Event Log, has following sentence: There are no
 requirements for event log to be in DMA protected memory – SINIT will
 not enforce it.

 I was thinking about it and it seems to me that the TPM event log does
 not require any special protections. Any changes in it can be quickly
 detected by comparing hashes with the TPM PCRs. Does not it?

>>>
>>> I think it would be beneficial to consider the following in deciding
>>> where the log is placed. There are two areas of attack/manipulation that
>>> need to be considered. The first area is the log contents itself, which
>>> as Daniel has pointed out, the log contents do not really need to be
>>> protected from tampering as that would/should be detected during
>>> verification by the attestor. The second area that we need to consider
>>> is the log descriptors themselves. If these are in an area that can be
>>> manipulated, it is an opportunity for an attacker to attempt to
>>> influence the ACM's execution. For a little perspective, the ACM
>>> executes from CRAM to take the most possible precaution to ensure that
>>> it cannot be tampered with during execution. This is very important
>>> since it runs a CPU mode (ACM Mode) that I would consider to be higher
>>> (or lower depending on how you view it) than SMM. As a result, the txt
>>> heap is also included in what is mapped into CRAM. If the event log is
>>> place in the heap, this ensures that the ACM is not using memory outside
>>> of CRAM during execution. Now as Daniel has pointed out, the down side
>>> to this is that it greatly restricts the log size and can only be
>>> managed by a combination of limiting the number of events and
>>> restricting what content is carried in the event data field.
>>
>> Can you clarify what the actual flow of control is?  If I had to guess, it's:
>>
>> GRUB (or other bootloader) writes log.
>>
>> GRUB transfers control to the ACM.  At this point, GRUB is done
>> running and GRUB code will not run again.
>>
>> ACM validates system configuration and updates TPM state using magic
>> privileged TPM access.
>>
>> ACM transfers control to the shiny new Linux secure launch entry point
>>
>> Maybe this is right, and maybe this is wrong.  But I have some
>> questions about this whole setup.  Is the ACM code going to inspect
>> this log at all?  If so, why?  Who supplies the ACM code?  If the ACM
>> can be attacked by putting its inputs (e.g. this log) in the wrong
>> place in memory, why should this be considered anything other than a
>> bug in the ACM?
> 
> There is a lot behind that, so to get a complete detail of the event
> sequence I would recommend looking at Section Vol. 2D 6.2.3 (pg Vol. 2D
> 6-5/ pdf pg 2531), 6.3 GETSEC[ENTERACCS](pg 6-10 Vol. 2D/pdf pg 2546),
> and 6.3 GETSEC[SENTER](pg Vol. 2D 6-21/pdf pg 2557) in the Intel SDM[1].
> Section 6.2.3 gives a slightly detailed overview. Section
> GETSEC[ENTERACCS] details the requirements/procedures for entering AC
> execution mode and ACRAM (Authenticated CRAM) and section GETSEC[SENTER]
> will detail requirements/procedures for SENTER.
> 
> To answer you additional questions I would say if you look at all the
> work that goes into protecting the ACM execution, why would you want to
> then turn around and have it use memory outside of the protected region.
> On the other hand, you are right, if the Developer's Guide says it
> doesn't need to be protected and someone somehow finds a way to cause a
> failure in the ACM through the use of a log outside of CRAM, then
> rightfully that is a bug in the ACM. This is why I asked about making it
> configurable, paranoid people could set the configuration to use the
> heap and all others could just use an external location.

After thinking about it, it should be easy to make it configurable since,
as stated, it is up to the pre-launch code to decide where the buffer
is. To do 

Re: [GRUB PATCH RFC 00/18] i386: Intel TXT secure launcher

2020-06-01 Thread Daniel P. Smith
On 6/1/20 12:51 PM, Andy Lutomirski wrote:
> On Mon, Jun 1, 2020 at 8:33 AM Daniel P. Smith
>  wrote:
>>
>> On 5/7/20 7:06 AM, Daniel Kiper wrote:
>>> Hi Łukasz,
>>>
>>> On Tue, May 05, 2020 at 04:38:02PM +0200, Lukasz Hawrylko wrote:
 On Tue, 2020-05-05 at 01:21 +0200, Daniel Kiper wrote:
>>
>> ...
>>
 In OS-MLE table there is a buffer for TPM event log, however I see that
 you are not using it, but instead allocate space somewhere in the
>>>
>>> I think that this part requires more discussion. In my opinion we should
>>> have this region dynamically allocated because it gives us more flexibility.
>>> Of course there is a question about the size of this buffer too. I am
>>> not sure about that because I have not checked yet how many log entries
>>> are created by the SINIT ACM. Though probably it should not be large...
>>>
 memory. I am just wondering if, from security perspective, it will be
 better to use memory from TXT heap for event log, like we do in TBOOT.
>>>
>>> Appendix F, TPM Event Log, has following sentence: There are no
>>> requirements for event log to be in DMA protected memory – SINIT will
>>> not enforce it.
>>>
>>> I was thinking about it and it seems to me that the TPM event log does
>>> not require any special protections. Any changes in it can be quickly
>>> detected by comparing hashes with the TPM PCRs. Does not it?
>>>
>>
>> I think it would be beneficial to consider the following in deciding
>> where the log is placed. There are two areas of attack/manipulation that
>> need to be considered. The first area is the log contents itself, which
>> as Daniel has pointed out, the log contents do not really need to be
>> protected from tampering as that would/should be detected during
>> verification by the attestor. The second area that we need to consider
>> is the log descriptors themselves. If these are in an area that can be
>> manipulated, it is an opportunity for an attacker to attempt to
>> influence the ACM's execution. For a little perspective, the ACM
>> executes from CRAM to take the most possible precaution to ensure that
>> it cannot be tampered with during execution. This is very important
>> since it runs a CPU mode (ACM Mode) that I would consider to be higher
>> (or lower depending on how you view it) than SMM. As a result, the txt
>> heap is also included in what is mapped into CRAM. If the event log is
>> place in the heap, this ensures that the ACM is not using memory outside
>> of CRAM during execution. Now as Daniel has pointed out, the down side
>> to this is that it greatly restricts the log size and can only be
>> managed by a combination of limiting the number of events and
>> restricting what content is carried in the event data field.
> 
> Can you clarify what the actual flow of control is?  If I had to guess, it's:
> 
> GRUB (or other bootloader) writes log.
> 
> GRUB transfers control to the ACM.  At this point, GRUB is done
> running and GRUB code will not run again.
> 
> ACM validates system configuration and updates TPM state using magic
> privileged TPM access.
> 
> ACM transfers control to the shiny new Linux secure launch entry point
> 
> Maybe this is right, and maybe this is wrong.  But I have some
> questions about this whole setup.  Is the ACM code going to inspect
> this log at all?  If so, why?  Who supplies the ACM code?  If the ACM
> can be attacked by putting its inputs (e.g. this log) in the wrong
> place in memory, why should this be considered anything other than a
> bug in the ACM?

There is a lot behind that, so for the complete detail of the event
sequence I would recommend looking at Section Vol. 2D 6.2.3 (pg Vol. 2D
6-5/pdf pg 2531), 6.3 GETSEC[ENTERACCS] (pg Vol. 2D 6-10/pdf pg 2546),
and 6.3 GETSEC[SENTER] (pg Vol. 2D 6-21/pdf pg 2557) in the Intel SDM[1].
Section 6.2.3 gives a slightly detailed overview. Section
GETSEC[ENTERACCS] details the requirements/procedures for entering AC
execution mode and ACRAM (Authenticated CRAM), and section GETSEC[SENTER]
details the requirements/procedures for SENTER.

To answer your additional questions: if you look at all the work that
goes into protecting the ACM execution, why would you want to then turn
around and have it use memory outside of the protected region? On the
other hand, you are right: if the Developer's Guide says it doesn't need
to be protected and someone somehow finds a way to cause a failure in
the ACM through the use of a log outside of CRAM, then rightfully that
is a bug in the ACM. This is why I asked about making it configurable:
paranoid people could set the configuration to use the heap and all
others could just use an external location.

> If GRUB is indeed done by the time anyone consumes the log, why does
> GRUB care where the log ends up?

This is because the log buffer allocation was made the responsibility of
the pre-launch environment, in this case GRUB, and is communicated to
the ACM via the os_to_mle structure.

> And finally, the 

Re: [GRUB PATCH RFC 00/18] i386: Intel TXT secure launcher

2020-06-01 Thread Daniel P. Smith
On 5/7/20 7:06 AM, Daniel Kiper wrote:
> Hi Łukasz,
> 
> On Tue, May 05, 2020 at 04:38:02PM +0200, Lukasz Hawrylko wrote:
>> On Tue, 2020-05-05 at 01:21 +0200, Daniel Kiper wrote:

...

>> In OS-MLE table there is a buffer for TPM event log, however I see that
>> you are not using it, but instead allocate space somewhere in the
> 
> I think that this part requires more discussion. In my opinion we should
> have this region dynamically allocated because it gives us more flexibility.
> Of course there is a question about the size of this buffer too. I am
> not sure about that because I have not checked yet how many log entries
> are created by the SINIT ACM. Though probably it should not be large...
> 
>> memory. I am just wondering if, from security perspective, it will be
>> better to use memory from TXT heap for event log, like we do in TBOOT.
> 
> Appendix F, TPM Event Log, has following sentence: There are no
> requirements for event log to be in DMA protected memory – SINIT will
> not enforce it.
> 
> I was thinking about it and it seems to me that the TPM event log does
> not require any special protections. Any changes in it can be quickly
> detected by comparing hashes with the TPM PCRs. Does not it?
> 

I think it would be beneficial to consider the following in deciding
where the log is placed. There are two areas of attack/manipulation that
need to be considered. The first area is the log contents themselves,
which, as Daniel has pointed out, do not really need to be protected
from tampering, as that would/should be detected during verification by
the attestor. The second area that we need to consider is the log
descriptors themselves. If these are in an area that can be manipulated,
it is an opportunity for an attacker to attempt to influence the ACM's
execution. For a little perspective, the ACM executes from CRAM to take
the most possible precaution to ensure that it cannot be tampered with
during execution. This is very important since it runs in a CPU mode
(ACM Mode) that I would consider to be higher (or lower depending on how
you view it) than SMM. As a result, the TXT heap is also included in
what is mapped into CRAM. If the event log is placed in the heap, this
ensures that the ACM is not using memory outside of CRAM during
execution. Now, as Daniel has pointed out, the downside to this is that
it greatly restricts the log size, which can only be managed by a
combination of limiting the number of events and restricting what
content is carried in the event data field.

With everything that has been said, I am wondering if it would be
possible to make it configurable. Thoughts?

V/r,
DPS




Re: [GRUB PATCH RFC 15/18] i386/txt: Add Intel TXT core implementation

2020-06-01 Thread Ross Philipson
On 5/22/20 9:24 AM, Krystian Hebel wrote:
> 
> On 05.05.2020 01:21, Daniel Kiper wrote:
>> +static grub_err_t
>> +init_txt_heap (struct grub_slaunch_params *slparams, struct
>> grub_txt_acm_header *sinit)
>> +{
>> +  grub_uint8_t *txt_heap;
>> +  grub_uint32_t os_sinit_data_ver, sinit_caps;
>> +  grub_uint64_t *size;
>> +  struct grub_txt_os_mle_data *os_mle_data;
>> +  struct grub_txt_os_sinit_data *os_sinit_data;
>> +  struct grub_txt_heap_end_element *heap_end_element;
>> +  struct grub_txt_heap_event_log_pointer2_1_element
>> *heap_event_log_pointer2_1_element;
>> +#ifdef GRUB_MACHINE_EFI
>> +  struct grub_acpi_rsdp_v20 *rsdp;
>> +#endif
>> +
>> +  /* BIOS data already verified in grub_txt_verify_platform(). */
>> +
>> +  txt_heap = grub_txt_get_heap ();
>> +
>> +  /* OS/loader to MLE data. */
>> +
>> +  os_mle_data = grub_txt_os_mle_data_start (txt_heap);
>> +  size = (grub_uint64_t *) ((grub_addr_t) os_mle_data - sizeof
>> (grub_uint64_t));
> There is 'grub_txt_os_mle_data_size()' in previous patch, it would look
> better.
>> +  *size = sizeof (*os_mle_data) + sizeof (grub_uint64_t);
>> +
>> +  grub_memset (os_mle_data, 0, sizeof (*os_mle_data));
>> +
>> +  os_mle_data->version = GRUB_SL_OS_MLE_STRUCT_VERSION;
>> +  os_mle_data->zero_page_addr = (grub_uint32_t)(grub_addr_t)
>> slparams->params;
>> +  os_mle_data->saved_misc_enable_msr = grub_rdmsr
>> (GRUB_MSR_X86_MISC_ENABLE);
>> +
>> +  os_mle_data->ap_wake_block = slparams->ap_wake_block;
>> +
>> +  save_mtrrs (os_mle_data);
>> +
>> +  /* OS/loader to SINIT data. */
>> +
>> +  os_sinit_data_ver = grub_txt_supported_os_sinit_data_ver (sinit);
>> +
>> +  if (os_sinit_data_ver < OS_SINIT_DATA_MIN_VER)
>> +    return grub_error (GRUB_ERR_BAD_DEVICE,
>> +   N_("unsupported OS to SINIT data version in SINIT ACM:
>> %d"
>> +   " expected >= %d"), os_sinit_data_ver,
>> OS_SINIT_DATA_MIN_VER);
>> +
>> +  os_sinit_data = grub_txt_os_sinit_data_start (txt_heap);
>> +  size = (grub_uint64_t *) ((grub_addr_t) os_sinit_data - sizeof
>> (grub_uint64_t));
> Ditto
>> +
>> +  *size = sizeof(grub_uint64_t) + sizeof (struct
>> grub_txt_os_sinit_data) +
>> +  sizeof (struct grub_txt_heap_end_element);
>> +
>> +  if (grub_get_tpm_ver () == GRUB_TPM_12)
>> +    *size += sizeof (struct grub_txt_heap_tpm_event_log_element);
>> +  else if (grub_get_tpm_ver () == GRUB_TPM_20)
>> +    *size += sizeof (struct grub_txt_heap_event_log_pointer2_1_element);
>> +  else
>> +    return grub_error (GRUB_ERR_BAD_DEVICE, N_("unsupported TPM
>> version"));
>> +
>> +  grub_memset (os_sinit_data, 0, *size);
>> +
>> +#ifdef GRUB_MACHINE_EFI
>> +  rsdp = grub_acpi_get_rsdpv2 ();
>> +
>> +  if (rsdp == NULL)
>> +    return grub_printf ("WARNING: ACPI RSDP 2.0 missing\n");
>> +
>> +  os_sinit_data->efi_rsdt_ptr = (grub_uint64_t)(grub_addr_t) rsdp;
>> +#endif
>> +
>> +  os_sinit_data->mle_ptab = slparams->mle_ptab_target;
>> +  os_sinit_data->mle_size = slparams->mle_size;
>> +
>> +  os_sinit_data->mle_hdr_base = slparams->mle_header_offset;
>> +
>> +  /* TODO: Check low PMR with RMRR. Look at relevant tboot code too. */
>> +  /* TODO: Kernel should not allocate any memory outside of PMRs
>> regions!!! */
>> +  os_sinit_data->vtd_pmr_lo_base = 0;
>> +  os_sinit_data->vtd_pmr_lo_size = ALIGN_DOWN (grub_mmap_get_highest (0x100000000),
>> +   GRUB_TXT_PMR_ALIGN);
>> +
>> +  os_sinit_data->vtd_pmr_hi_base = ALIGN_UP (grub_mmap_get_lowest (0x100000000),
>> + GRUB_TXT_PMR_ALIGN);
>> +  os_sinit_data->vtd_pmr_hi_size = ALIGN_DOWN (grub_mmap_get_highest (0xffffffffffffffff),
>> +   GRUB_TXT_PMR_ALIGN);
>> +  os_sinit_data->vtd_pmr_hi_size -= os_sinit_data->vtd_pmr_hi_base;
> Could it be done with just one PMR, from 0 to top of memory, or would

No, because there could be legacy reserved regions somewhere below 4G
that might cause hangs if you block DMA to them. So the idea is that the
high PMR covers everything > 4G and the low PMR needs special logic to
not block these reserved regions. We are working on better logic for
this now.

> TXT complain?
>> +
>> +  grub_dprintf ("slaunch",
>> +    "vtd_pmr_lo_base: 0x%" PRIxGRUB_UINT64_T " vtd_pmr_lo_size: 0x%"
>> +    PRIxGRUB_UINT64_T " vtd_pmr_hi_base: 0x%" PRIxGRUB_UINT64_T
>> +    " vtd_pmr_hi_size: 0x%" PRIxGRUB_UINT64_T "\n",
>> +    os_sinit_data->vtd_pmr_lo_base, os_sinit_data->vtd_pmr_lo_size,
>> +    os_sinit_data->vtd_pmr_hi_base, os_sinit_data->vtd_pmr_hi_size);
>> +
>>
>> 
>>
>> +/*
>> + * The MLE page tables have to be below the MLE and have no special
>> regions in
>> + * between them and the MLE (this is a bit of an unwritten rule).
>> + * 20 pages are carved out of memory below the MLE. That leave 18
>> page table
>> + * pages that can cover up to 36M .
>> + * can only contain 4k pages
>> + *
>> + * TODO: TXT Spec p.32; List section name and number with PT MLE
>> requirments here.
>> + *
>> + * TODO: This 

[PATCH] disk/loopback: Don't verify loopback images

2020-06-01 Thread Chris Coulson
When a file is verified, the entire contents of the verified file are
loaded into memory and retained until the file handle is closed. A
consequence of this is that opening a loopback image can incur a
significant memory cost.

As loopback devices are just another disk implementation, don't treat
loopback images any differently from physical disk images, and skip
verification of them. Files opened from the filesystem within a loopback
image will still be passed to verifier modules where required.

Signed-off-by: Chris Coulson 
---
 grub-core/disk/loopback.c | 3 ++-
 1 file changed, 2 insertions(+), 1 deletion(-)

diff --git a/grub-core/disk/loopback.c b/grub-core/disk/loopback.c
index cdf9123fa..01267e577 100644
--- a/grub-core/disk/loopback.c
+++ b/grub-core/disk/loopback.c
@@ -93,7 +93,8 @@ grub_cmd_loopback (grub_extcmd_context_t ctxt, int argc, char **args)
 return grub_error (GRUB_ERR_BAD_ARGUMENT, N_("filename expected"));
 
   file = grub_file_open (args[1], GRUB_FILE_TYPE_LOOPBACK
-| GRUB_FILE_TYPE_NO_DECOMPRESS);
+| GRUB_FILE_TYPE_NO_DECOMPRESS |
+GRUB_FILE_TYPE_SKIP_SIGNATURE);
   if (! file)
 return grub_errno;
 
-- 
2.25.1




Re: [BOOTLOADER SPECIFICATION RFC] The bootloader log format for TrenchBoot and others

2020-06-01 Thread Roger Pau Monné
On Fri, May 29, 2020 at 01:27:35PM +0200, Daniel Kiper wrote:
> Hey,
> 
> Below you can find my rough idea of the bootloader log format which is
> generic thing but initially will be used for TrenchBoot work. I discussed
> this proposal with Ross and Daniel S. So, the idea went through initial
> sanitization. Now I would like to take feedback from other folks too.
> So, please take a look and complain...
> 
> In general we want to pass the messages produced by the bootloader to the OS
> kernel and finally to the userspace for further processing and analysis. Below
> is the description of the structures which will be used for this thing.
> 
>   struct bootloader_log_msgs
>   {
> grub_uint32_t level;
> grub_uint32_t facility;

Nit: if this is aimed at cross-OS and cross-bootloader compatibility,
uint32_t should be used here instead of a GRUB-specific alias.

> char type[];
> char msg[];

I think you want char * here instead? Or are the above arrays expected
to have a fixed size in the final spec?

>   }
> 
>   struct bootloader_log
>   {
> grub_uint32_t version;
> grub_uint32_t producer;
> grub_uint32_t size;
> grub_uint32_t next_off;
> bootloader_log_msgs msgs[];

As I understand it this is not a pointer to an array of
bootloader_log_msgs but rather in-place?

>   }
> 
> The members of struct bootloader_log:
>   - version: the bootloader log format version number, 1 for now,
>   - producer: the producer/bootloader type; we can steal some values from
> linux/Documentation/x86/boot.rst:type_of_loader,
>   - size: total size of the log buffer including the bootloader_log struct,
>   - next_off: offset in bytes, from start of the bootloader_log struct,
> of the next byte after the last log message in the msgs[];
> i.e. the offset of the next available log message slot,

So this will be a memory area that's shared between the OS and the
bootloader and needs to be updated at runtime?

If this is something that's created by the bootloader during loading
and passed to the OS for consumption (but it's not further updated), I
don't see much point in the next_off field. The size field could be
updated to reflect the actual size of the produced messages?

Roger.



Re: [BOOTLOADER SPECIFICATION RFC] The bootloader log format for TrenchBoot and others

2020-06-01 Thread Daniel Kiper
On Fri, May 29, 2020 at 10:11:40AM -0700, Andy Lutomirski wrote:
> > On May 29, 2020, at 4:30 AM, Daniel Kiper  wrote:
> >
> > Hey,
> >
> > Below you can find my rough idea of the bootloader log format which is
> > generic thing but initially will be used for TrenchBoot work. I discussed
> > this proposal with Ross and Daniel S. So, the idea went through initial
> > sanitization. Now I would like to take feedback from other folks too.
> > So, please take a look and complain...
>
> > In general we want to pass the messages produced by the bootloader to the OS
> > kernel and finally to the userspace for further processing and analysis. 
> > Below
> > is the description of the structures which will be used for this thing.
>
> Is the intent for a human to read these, or to get them into the
> system log file, or are they intended to actually change the behavior
> of the system?
>
> IOW, what is this for?

In general the idea is for a human to read these. There will be a separate
tool which reads the bootloader log exposed via the Linux kernel's sysfs and
displays its contents to the user. The tool will allow the user to do various
kinds of filtering. We are not going to put the contents of the bootloader log
into any of the system logs. However, if somebody wants to do that in a
sensible way, I am OK with that.

Daniel
