Re: [PATCH v7 02/13] Documentation/x86: Secure Launch kernel documentation
assessment. In theory, the solution employing a TPM seal will have established what the expected sequence of measurements should be, and ensured the TPM seal was the final and correct measurement. I don't know if you will find it too rudimentary, but I feel I did a fairly decent job covering this in the first TrenchBoot talk[1].

[1] https://www.platformsecuritysummit.com/2018/speaker/smith/

> The rest of the documentation was easy to understand and very helpful for understanding system launch integrity. Thanks!

I am very glad to hear you found it helpful. This is a very complex topic, and trying to break it all down for an audience that may have zero background but an interest in helping is no small undertaking.

V/r,
Daniel P. Smith
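The "expected sequence of measurements" above can be made concrete: a PCR only ever changes by extension, new = H(old || measurement), so a verifier (or a sealing policy author) can compute the expected final PCR value by replaying the sequence. A minimal sketch in Python — illustrative only, not TrenchBoot code; a SHA-256 PCR bank and an all-zero DRTM reset value are assumed:

```python
import hashlib

def extend(pcr: bytes, measurement: bytes) -> bytes:
    """TPM PCR extend semantics: new value = H(old value || measurement)."""
    return hashlib.sha256(pcr + measurement).digest()

def expected_pcr(measurements) -> bytes:
    """Replay an ordered list of SHA-256 digests into a fresh PCR.

    DRTM PCRs are assumed to reset to all zeroes at dynamic launch;
    the final value is what a seal policy would be bound to.
    """
    pcr = bytes(32)
    for m in measurements:
        pcr = extend(pcr, m)
    return pcr
```

Because each extend hashes the running value with the next digest, both the contents and the order of the measurements are captured in the final PCR value.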
Re: [PATCH v4 00/14] x86: Trenchboot secure dynamic launch Linux kernel support
Paul, Apologies for missing your follow-up questions. Hopefully, the below answers will help. On 1/21/22 16:39, Paul Moore wrote: > On Mon, Dec 6, 2021 at 3:56 PM Paul Moore wrote: >> On Thu, Dec 2, 2021 at 11:11 AM Daniel P. Smith >> wrote: >>> Hi Paul! >> >> /me waves >> >>> On 11/30/21 8:06 PM, Paul Moore wrote: >>>> On Fri, Aug 27, 2021 at 9:20 AM Ross Philipson >>>> wrote: >>>>> >>>>> The larger focus of the Trechboot project (https://github.com/TrenchBoot) >>>>> is to >>>>> enhance the boot security and integrity in a unified manner. The first >>>>> area of >>>>> focus has been on the Trusted Computing Group's Dynamic Launch for >>>>> establishing >>>>> a hardware Root of Trust for Measurement, also know as DRTM (Dynamic Root >>>>> of >>>>> Trust for Measurement). >>>> >>>> My apologies for such a late reply, but I'm just getting around to >>>> looking at this and I have a few questions on the basic design/flow >>>> (below) ... >>> >>> No worries, thank you so much for taking the time to review. >>> >>>>> The basic flow is: >>>>> >>>>> - Entry from the dynamic launch jumps to the SL stub >>>> >>>> So I'm clear, at this point the combined stub+kernel+initramfs+cmdline >>>> image has already been loaded into memory and the SL stub is >>>> executing, yes? >>> >>> That is correct. >>> >>>> As TrenchBoot seems to be focused on boot measurement and not >>>> enforcing policy, I'm guessing this is considered out-of-scope (not to >>>> mention that the combined stub+kernel image makes this less >>>> interesting), but has any thought been given to leveraging the TXT >>>> launch control policy, or is it simply an empty run-everything policy? >>> >>> The TrenchBoot model is a bit different and takes a more flexible >>> approach to allow users to build tailored solutions. For instance Secure >>> Launch is able to be used in a configuration that is similar to tboot. 
>>> Consider the functions of tboot, it has a portion that is the >>> post-launch kernel that handles the handover from the ACM and a portion >>> that provides the Verified Launch policy engine, which is only capable >>> of enforcing policy on what is contained in the Multiboot chain. The >>> TrenchBoot approach is to introduce the Secure Launch capability into a >>> kernel, in this case Linux, to handle the handover from the ACM, and >>> then transition to a running user space that can contain a distribution >>> specific policy enforcement. As an example, the TrenchBoot project >>> contributed to the uroot project a Secure Launch policy engine which >>> enables the creation of an initramfs image which can then be embedded >>> into a minimal configuration Secure Launch Linux kernel ... >> >> Thank you for the answers, that was helpful. >> >> I think I initially misunderstood TrenchBoot, thinking that a Secure >> Launch'd kernel/userspace would be the "normal" OS that would >> transition to multi-user mode and be available for users and >> applications. However, on reading your response it appears that the >> Secure Launch'd kernel/initramfs exists only to verify a secondary >> kernel/initramfs/userspace and then kexec() into that once verified.

Yes, it can be used in this manner, but this is not the only way it was intended to be used. The goal is to enable an integrator, e.g., a distro, to incorporate Linux Secure Launch appropriately for their security needs, though ideally it would be preferred that a standardized approach is adopted by Linux init tooling to provide a common experience across distros. Up until the introduction of Secure Launch, the only widely deployed model for DRTM has been to use tboot. Tboot is an MLE/DLME that functions as an exokernel and an intermediate loader for the Runtime OS/MLE.
This motivated the first exemplar solution to be a Linux Secure Launch + u-root solution that would provide a similar intermediate loader experience, but with u-root's expanded ability to measure additional properties about a system. As a result, a distro could use the exemplar to replace tboot, the tboot VL policy tools, and the VL policy file with a Secure Launch kernel, a u-root initrd (built-in or standalone), and a JSON policy file. By no means was Secure Launch meant to be limited to only being used as an intermediate loader for a Runtime OS. There is nothing that proh
uld be extended. Like the tboot VL policy engine, the u-root policy engine is configurable as to where the measurements are stored. As highlighted, more components are measured as part of Secure Launch than for tboot. The approach taken was to model after DA and put binaries into PCR 17 and configuration into PCR 18. Later there were requirements to isolate certain measurements. For the time being, this is provided through kconfig to move config to PCR 19 and the initrd to PCR 20. In the future, if/when additional measurements are incorporated, such as signing keys that are embedded into the kernel, then it may be necessary to provide a means to configure PCR usage at runtime.

>> - Kernel boot proceeds normally from this point.
>> - During early setup, slaunch_setup() runs to finish some validation and setup tasks.
>> - The SMP bringup code is modified to wake the waiting APs. APs vector to rmpiggy and start up normally from that point.
>> - SL platform module is registered as a late initcall module. It reads the TPM event log and extends the measurements taken into the TPM PCRs.
>
> I'm sure there is some issue with passing data across boundaries, but
> is there any reason why the TPM event log needs to be updated
> out-of-sync with the TPM PCRs? Is it possible to pass the
> measurements to the SL platform module which would both extend the
> PCRs and update the TPM event log at the same time?

Without rehashing the issues around TPM usage from the stub, the core issue is that measurements need to be collected before use, at a time when access to the TPM is not possible. Thus the measurements need to be collected and persisted in a location where they can be retrieved once the main kernel is in control. It was felt that the DRTM TPM log was a natural place, as it persists all the info necessary for the SL platform module to record the measurements.
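The PCR assignment scheme described above can be summarized in a small model. This is purely illustrative — the function name and the exact default for the initrd are assumptions based on the description, not the actual kernel code:

```python
def slaunch_pcr(kind: str, isolate: bool = False) -> int:
    """Map a measured item class to a DRTM PCR, following the DA-style
    scheme described above: binaries -> 17, configuration -> 18, with an
    optional kconfig-driven isolation that moves config -> 19 and the
    initrd -> 20."""
    if kind == "config":
        return 19 if isolate else 18
    if kind == "initrd":
        # Assumption: absent isolation, the initrd is measured as a binary.
        return 20 if isolate else 17
    if kind == "binary":
        return 17
    raise ValueError(f"unknown measurement class: {kind}")
```

A runtime-configurable variant, as suggested for embedded signing keys, would simply replace the hard-coded table with one supplied by the user.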
It does result in two non-event entries existing in the log, but those are standardized such that any TCG-compliant log parser should ignore them. With the explanation of why it was done aside, if there is another location that is preferred and can provide the necessary guarantees, then there is no reason why they could not be switched to that location.

>> - SL platform module initializes the securityfs interface to allow access to the TPM event log and TXT public registers.
>> - Kernel boot finishes booting normally
>> - SEXIT support to leave SMX mode is present on the kexec path and the various reboot paths (poweroff, reset, halt).
>
> It doesn't look like it's currently implemented, but it looks like
> eventually you plan to support doing a new DRTM measurement on kexec,
> is that correct? I'm sure that is something a *lot* of people
> (including myself) would like to see happen.

Correct, relaunch is not currently implemented but has always been planned, as relaunch enables DRTM late-launch use cases. For a few reasons this is being elevated in priority, and as a result there is a short-term solution to quickly enable relaunch, with longer-term direct integration into kexec.

Finally, if your schedule allows it and it is not too much to ask, it would be greatly appreciated if some code review could be provided. Otherwise, thank you for taking the time that you have to review the approach.

V/r,
Daniel P. Smith
Apertus Solutions, LLC

___ iommu mailing list iommu@lists.linux-foundation.org https://lists.linuxfoundation.org/mailman/listinfo/iommu
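Since the DRTM TPM event log carries everything needed to replay the measurements, parsing it is straightforward. Below is a hedged sketch of the legacy (SHA-1 / TPM 1.2-style) TCG_PCR_EVENT layout only; real TPM 2.0 logs use the crypto-agile format, which this deliberately omits, and the fields are read little-endian as on x86:

```python
import struct

def parse_tpm12_event(buf: bytes, off: int = 0):
    """Parse one legacy TCG_PCR_EVENT entry from an event log buffer.

    Layout: u32 pcrIndex, u32 eventType, 20-byte SHA-1 digest,
    u32 eventDataSize, then eventDataSize bytes of event data.
    Returns the parsed entry and the offset of the next entry.
    """
    pcr, etype = struct.unpack_from("<II", buf, off)
    digest = buf[off + 8:off + 28]
    (size,) = struct.unpack_from("<I", buf, off + 28)
    data = buf[off + 32:off + 32 + size]
    return {"pcr": pcr, "type": etype, "digest": digest, "data": data}, off + 32 + size
```

A log reader would call this in a loop until the buffer is exhausted, skipping (per the discussion above) any standardized non-event entries it does not recognize.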
Re: [PATCH v4 14/14] tpm: Allow locality 2 to be set when initializing the TPM for Secure Launch
On 8/27/21 9:30 AM, Jason Gunthorpe wrote:
> On Fri, Aug 27, 2021 at 09:28:37AM -0400, Ross Philipson wrote:
>> The Secure Launch MLE environment uses PCRs that are only accessible from
>> the DRTM locality 2. By default the TPM drivers always initialize the
>> locality to 0. When a Secure Launch is in progress, initialize the
>> locality to 2.
>>
>> Signed-off-by: Ross Philipson
>> ---
>> drivers/char/tpm/tpm-chip.c | 9 -
>> 1 file changed, 8 insertions(+), 1 deletion(-)
>
> Global state like this seems quite dangerous, shouldn't the locality
> be selected based on the PCR being accessed, or passed down from
> higher up in the call chain?
>
> Jason

Hey Jason,

While locality does control which PCRs are accessible, it is meant to reflect the component from which a TPM command originates. To quote the TCG spec, "'Locality' is an assertion to the TPM that a command's source is associated with a particular component. Locality can be thought of as a hardware-based authorization." Thus, when the SRTM chain, including the Static OS, is in control, the hardware is required to restrict access to only Locality 0. Once a Dynamic Launch has occurred, the hardware grants access to Localities 1 and 2. Again referring to the TCG spec, the definition of Locality 2 is the "Dynamically Launched OS (Dynamic OS) 'runtime' environment".

When Linux is started from the SRTM, it is correct for the TPM driver to set the locality to Locality 0 to denote that the source is the Static OS. When Linux is started from the DRTM, however, the TPM driver should set the locality to Locality 2 to denote that it is the "runtime" Dynamic OS. As for the differences in access, with Locality 0 Linux (being the Static OS) is restricted to the Static PCRs (0-15), the Debug PCR (16), and the Application PCR (23), whereas with Locality 2 Linux, now being the Dynamic OS, has access to all PCRs. To summarize, TPM locality really is a global state that reflects the component in control of the platform.
Considering the definition of locality, setting the locality to Locality 0 is saying that the Static Environment is now in control. Doing so after the Dynamic Environment has started would be an inaccurate setting of the platform state. The correct time at which the locality should change back to Locality 0 is after the Dynamic Environment has been exited. On Intel this can be done by invoking the instruction GETSEC[SEXIT]. It should be noted that Secure Launch adds the call to the GETSEC[SEXIT] instruction in the kexec, reboot, and shutdown paths.

v/r,
dps
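The locality/PCR relationship described above can be sketched as follows. This is an illustrative model only: the per-PCR access sets come straight from the summary above (localities 1, 3, and 4 are omitted for brevity), and the 0xFED40000 base with one 4 KiB register page per locality is the conventional PC Client TIS MMIO layout on x86:

```python
TPM_TIS_BASE = 0xFED40000  # conventional TIS MMIO base on x86 PC Client platforms

def locality_mmio_base(locality: int) -> int:
    """Each locality is addressed via its own 4 KiB register page."""
    return TPM_TIS_BASE + locality * 0x1000

def accessible_pcrs(locality: int) -> set:
    """PCRs accessible to the OS at a given locality, per the summary above."""
    if locality == 0:
        # Static OS: static PCRs 0-15, debug PCR 16, application PCR 23
        return set(range(0, 17)) | {23}
    if locality == 2:
        # Dynamic OS: all PCRs
        return set(range(0, 24))
    raise NotImplementedError("localities 1/3/4 omitted in this sketch")
```

This also illustrates why locality is inherently global state: it is selected by which register page the commands are issued through, not by a per-command parameter.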
Re: [PATCH v3 00/14] x86: Trenchboot secure dynamic launch Linux kernel support
On 8/10/21 12:23 PM, Jarkko Sakkinen wrote: > On Mon, Aug 09, 2021 at 12:38:42PM -0400, Ross Philipson wrote: >> The focus of Trechboot project (https://github.com/TrenchBoot) is to >> enhance the boot security and integrity. This requires the linux kernel > ~ > Linux > > How does it enhance it? The following sentence explains the requirements > for the Linux kernel, i.e. it's a question without answer. And if there > is no answer, there is no need to merge this. We have added a documentation patch that provides background information, an overview of the capability, and details about the implementation. We can reword the cover letter, adding reference to this documentation. And ack on fixing the incorrect case on Linux. V/r dps
Re: [PATCH v2 12/12] iommu: Do not allow IOMMU passthrough with Secure Launch
On 6/21/21 5:15 PM, Andy Lutomirski wrote:
On Mon, Jun 21, 2021 at 10:51 AM Ross Philipson wrote:
On 6/18/21 2:32 PM, Robin Murphy wrote:
On 2021-06-18 17:12, Ross Philipson wrote:

@@ -2761,7 +2762,10 @@ void iommu_set_default_passthrough(bool cmd_line)
 {
 	if (cmd_line)
 		iommu_cmd_line |= IOMMU_CMD_LINE_DMA_API;
-	iommu_def_domain_type = IOMMU_DOMAIN_IDENTITY;
+
+	/* Do not allow identity domain when Secure Launch is configured */
+	if (!(slaunch_get_flags() & SL_FLAG_ACTIVE))
+		iommu_def_domain_type = IOMMU_DOMAIN_IDENTITY;

Quietly ignoring the setting and possibly leaving iommu_def_domain_type uninitialised (note that 0 is not actually a usable type) doesn't seem great. AFAICS this probably warrants similar treatment to the

Ok so I guess it would be better to set it to IOMMU_DOMAIN_DMA even though passthrough was requested. Or perhaps something more is needed here?

mem_encrypt_active() case - there doesn't seem a great deal of value in trying to save users from themselves if they care about measured boot yet explicitly pass options which may compromise measured boot. If you really want to go down that route there's at least the sysfs interface you'd need to nobble as well, not to mention the various ways of completely disabling IOMMUs...

Doing a secure launch with the kernel is not a general purpose user use case. A lot of work is done to secure the environment. Allowing passthrough mode would leave the secure launch kernel exposed to DMA. I think what we are trying to do here is what we intend, though there may be a better way or perhaps it is incomplete as you suggest.

I don't really like all these special cases. Generically, what you're trying to do is (AFAICT) to get the kernel to run in a mode in which it does its best not to trust attached devices. Nothing about this is specific to Secure Launch. There are plenty of scenarios in which this is the case:
- Virtual devices in a VM host outside the TCB, e.g.
VDUSE, Xen device domains (did I get the name right?), whatever tricks QEMU has, etc.
- SRTM / DRTM technologies (including but not limited to Secure Launch -- plain old Secure Boot can work like this too).
- Secure guest technologies, including but not limited to TDX and SEV.
- Any computer with a USB-C port or other external DMA-capable port.
- Regular computers in which the admin wants to enable this mode for whatever reason.

Can you folks all please agree on a coordinated way for a Linux kernel to configure itself appropriately? Or to be configured via initramfs, boot option, or some other trusted source of configuration supplied at boot time? We don't need a whole bunch of if (TDX), if (SEV), if (secure launch), if (I have a USB-C port with PCIe exposed), if (running on Xen), and similar checks all over the place.

Hey Andy,

On behalf of Ross and myself I wanted to follow up on the points raised here. While there is an interest in ensuring a system is properly configured, we should not block the user from configuring the system as they desire. Instead, we are taking the approach of documenting the Secure Launch capability, in particular the recommended way to configure the kernel to use the capability appropriately, using already existing methods such as kernel parameters. Hopefully that will address the concerns in the short term. Looking forward, we do have a vested interest in ensuring there is an ability to configure access control for security- and safety-critical solutions, and would be grateful to be included in any discussions or working groups that might be looking into unifying how all these security technologies should configure the Linux kernel.

V/r,
Daniel P. Smith
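The fallback Ross suggests earlier in the thread — fall back to a translated DMA domain rather than silently leaving the domain type unset when passthrough is requested under Secure Launch — can be modeled in a few lines. This is an illustrative Python model of the decision logic only; the constant values are stand-ins, not the kernel's actual definitions:

```python
# Stand-in values for illustration; not the kernel's actual constants.
IOMMU_DOMAIN_IDENTITY = 1  # passthrough: devices see physical addresses
IOMMU_DOMAIN_DMA = 2       # translated: DMA goes through IOMMU mappings

SL_FLAG_ACTIVE = 0x1

def default_domain_type(passthrough_requested: bool, slaunch_flags: int) -> int:
    """Honor a passthrough request only when Secure Launch is not active;
    otherwise fall back to a translated (DMA) domain so the measured
    kernel is never left exposed to unrestricted device DMA, and the
    domain type is never left at the unusable value 0."""
    if passthrough_requested and not (slaunch_flags & SL_FLAG_ACTIVE):
        return IOMMU_DOMAIN_IDENTITY
    return IOMMU_DOMAIN_DMA
```

The point of the model is that every input combination yields a usable domain type, addressing Robin's objection about the type being left uninitialised.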
Re: [PATCH 05/13] x86: Add early TPM1.2/TPM2.0 interface support for Secure Launch
On 9/25/20 1:43 AM, Jarkko Sakkinen wrote: > On Thu, Sep 24, 2020 at 10:58:33AM -0400, Ross Philipson wrote: >> From: "Daniel P. Smith" >> >> This commit introduces an abstraction for TPM1.2 and TPM2.0 devices >> above the TPM hardware interface. >> >> Signed-off-by: Daniel P. Smith >> Signed-off-by: Ross Philipson > > This is way, way too PoC. I wonder why there is no RFC tag. > > Please also read section 2 of > > https://www.kernel.org/doc/html/v5.8/process/submitting-patches.html > > You should leverage existing TPM code in a way or another. Refine it so > that it scales for your purpose and then compile it into your thing > (just include the necesary C-files with relative paths). > > How it is now is never going to fly. > > /Jarkko > After attempts to engage in finding alternative approaches, it appears that the only welcomed approach for sending measurements from the compressed kernel would be a major rewrite of the mainline TPM driver to: 1. Abstract out the mainline kernel infrastructure that is used by the driver 2. Find ways to introduce a minimal amount of the equivalent infrastructure into the compressed kernel, to make the driver code reusable within the compressed kernel. This approach would exceed the scope of changes we want to introduce to non-SecureLaunch code to enable direct DRTM launch for the Linux kernel. After careful consideration and discussions with colleagues from the trusted computing community, an alternative has been crafted. We aim to submit a version 2 with the following approach: 1. SecureLaunch will take measurements in the compressed kernel as we do in version 1, but instead of immediately sending them to the TPM, they will be stored in the DRTM TPM event log. 2. When the SecureLaunch module in the mainline kernel comes on line, it can send measurements to the TPM using the mainline TPM driver. 
While it would be ideal to record measurements at the time they are taken, the mainline kernel is measured alongside the compressed kernel as a single measurement. This means the same measured entity stays in control, prior to execution by any other entity within the system. At a later date, if the TPM maintainers refactor the TPM driver for reuse within the compressed kernel, then the sending of measurements can be revisited. For individuals and distributions that may prefer to record DRTM measurements earlier, the TrenchBoot project will do its best to maintain an external patch to provide that capability to a mainline LTS kernel.

V/r,
Daniel P. Smith
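The measure-now, extend-later flow described above amounts to the following. This is a simplified sketch, not the actual implementation — the real log entries carry TCG-formatted metadata rather than bare tuples, and the real flush goes through the mainline TPM driver:

```python
import hashlib

event_log = []  # stands in for the DRTM TPM event log buffer

def stub_measure(pcr_index: int, blob: bytes) -> None:
    """Early stub: hash the item now, record the digest in the event log,
    and defer the actual TPM extend until the TPM is reachable."""
    event_log.append((pcr_index, hashlib.sha256(blob).digest()))

def platform_module_flush(tpm_extend) -> None:
    """Later, once the SL platform module has the mainline TPM driver
    available, replay the recorded measurements into the TPM in the
    order they were taken."""
    for pcr, digest in event_log:
        tpm_extend(pcr, digest)
```

Because PCR extension is order-sensitive, replaying the log in recording order yields the same final PCR values as extending at measurement time would have.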
Re: [PATCH 05/13] x86: Add early TPM1.2/TPM2.0 interface support for Secure Launch
On 9/25/20 1:43 AM, Jarkko Sakkinen wrote: > On Thu, Sep 24, 2020 at 10:58:33AM -0400, Ross Philipson wrote: >> From: "Daniel P. Smith" >> >> This commit introduces an abstraction for TPM1.2 and TPM2.0 devices >> above the TPM hardware interface. >> >> Signed-off-by: Daniel P. Smith >> Signed-off-by: Ross Philipson > > This is way, way too PoC. I wonder why there is no RFC tag.

An RFC was sent back in March and we incorporated the feedback we received at that time.

> Please also read section 2 of > > https://www.kernel.org/doc/html/v5.8/process/submitting-patches.html > > You should leverage existing TPM code in a way or another. Refine it so > that it scales for your purpose and then compile it into your thing > (just include the necesary C-files with relative paths).

We explained during the RFC phase that we took a fair bit of time and a very hard look to see if we could #include sections out of the TPM driver, but as it stands today none of the TPM driver's C files can be included outside of the mainline kernel. If you look at the early boot stub for the compressed kernel, you will see that we are interacting with the TPM as the first thing upon leaving the assembly world and entering C. Since we weren't going to be able to pull the mainline TPM driver down, we could either 1) borrow an implementation from a colleague that provides the minimum command strings hard-coded in C macros to send measurements to the TPM, or 2) reuse the TPM implementation we wrote for TrenchBoot's AMD Secure Loader (LZ). The former is not well supported and the latter will be getting maintenance under TrenchBoot. While this is not preferred, we had to weigh it against trying to convince you and the other TPM driver maintainers to undertake a significant refactoring of the TPM driver. We elected to reuse a clean implementation that can be replaced later if/when the TPM driver is refactored.
We explained this during the RFC and it was not rejected; therefore, we carried it forward into this submission.

> How it is now is never going to fly.

We would gladly work with you and the other TPM maintainers on a refactoring of the TPM driver to separate core logic into standalone files that both the driver and the compressed kernel can share.

> /Jarkko
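For context on what "minimum command strings" for a TPM amounts to: a raw TPM 2.0 PCR_Extend command is just a fixed header, a password-session authorization area, and a digest list. Below is a hedged sketch of building one, based on my reading of the TPM 2.0 command specification (constants and layout should be double-checked against the spec; this is not TrenchBoot's code):

```python
import struct

TPM_ST_SESSIONS   = 0x8002      # tag: command carries an authorization area
TPM_CC_PCR_EXTEND = 0x00000182  # command code for TPM2_PCR_Extend
TPM_RS_PW         = 0x40000009  # built-in password-session handle
TPM_ALG_SHA256    = 0x000B

def build_pcr_extend(pcr: int, digest: bytes) -> bytes:
    """Build a raw TPM2_PCR_Extend command blob (empty password auth).

    All multi-byte fields on the TPM command interface are big-endian.
    """
    assert len(digest) == 32
    # Authorization area: session handle, empty nonce, attributes=0, empty hmac.
    auth = struct.pack(">IHBH", TPM_RS_PW, 0, 0, 0)
    # TPML_DIGEST_VALUES: count=1, then one TPMT_HA (alg + digest).
    digests = struct.pack(">IH", 1, TPM_ALG_SHA256) + digest
    body = struct.pack(">II", pcr, len(auth)) + auth + digests
    # Header: tag, total command size (header is 10 bytes), command code.
    return struct.pack(">HII", TPM_ST_SESSIONS, 10 + len(body), TPM_CC_PCR_EXTEND) + body
```

A stub with no driver underneath would write such a blob to the TPM's command buffer and poll for the response; hard-coding a handful of these as C macros is what option 1) above describes.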
Re: [PATCH 00/13] x86: Trenchboot secure dynamic launch Linux kernel support
On 9/25/20 1:30 AM, Jarkko Sakkinen wrote: > On Thu, Sep 24, 2020 at 10:58:28AM -0400, Ross Philipson wrote: >> The Trenchboot project focus on boot security has led to the enabling of >> the Linux kernel to be directly invocable by the x86 Dynamic Launch >> instruction(s) for establishing a Dynamic Root of Trust for Measurement >> (DRTM). The dynamic launch will be initiated by a boot loader with > > What is "the dynamic launch"? Dynamic launch is the term used to reference the event/process of restarting a system without reboot to establish the DRTM. It is defined in the TCG Glossary[1], is discussed in detail in the TCG D-RTM Architecture specification[2], and covered in minimal detail in sections 9.5.5 and 34.2 of the TCG TPM2.0 Architecture specification[3]. [1] https://trustedcomputinggroup.org/wp-content/uploads/TCG-Glossary-V1.1-Rev-1.0.pdf [2] https://trustedcomputinggroup.org/wp-content/uploads/TCG_D-RTM_Architecture_v1-0_Published_06172013.pdf [3] https://trustedcomputinggroup.org/wp-content/uploads/TCG_TPM2_r1p59_Part1_Architecture_pub.pdf >> associated support added to it, for example the first targeted boot >> loader will be GRUB2. An integral part of establishing the DRTM involves >> measuring everything that is intended to be run (kernel image, initrd, >> etc) and everything that will configure that kernel to run (command >> line, boot params, etc) into specific PCRs, the DRTM PCRs (17-22), in >> the TPM. Another key aspect is the dynamic launch is rooted in hardware, >> that is to say the hardware (CPU) is what takes the first measurement >> for the chain of integrity measurements. On Intel this is done using >> the GETSEC instruction provided by Intel's TXT and the SKINIT >> instruction provided by AMD's AMD-V. Information on these technologies >> can be readily found online. This patchset introduces Intel TXT support. > > Why not both Intel and AMD? You should explain this in the cover letter. 
The work for this is split across different teams with different resourcing levels resulting in one organization working Intel and another working AMD. This then raised the concern over submitting a single patch set developed by two groups pseudo-independently. In this situation the result would be patches being submitted from one organization that had no direct development or testing and therefore could not sign off on a subset of the patches being submitted. > I'd be more motivated to review and test a full all encompassing x86 > solution. It would increase the patch set size but would also give it > a better test coverage, which I think would be a huge plus in such a > complex patch set. We would not disagree with those sentiments but see the previous response about the conflict that exists. >> To enable the kernel to be launched by GETSEC, a stub must be built >> into the setup section of the compressed kernel to handle the specific >> state that the dynamic launch process leaves the BSP in. This is >> analogous to the EFI stub that is found in the same area. Also this stub > > How is it analogous? It is analogous as we used it as the pattern to follow for adding a configurable entry point to the kernel. There was a discussion on this when we published the RFC patches[4]. [4] https://lkml.org/lkml/2020/3/25/982 >> must measure everything that is going to be used as early as possible. >> This stub code and subsequent code must also deal with the specific >> state that the dynamic launch leaves the APs in. > > What is "the specific state"? The details are a bit more than I would prefer to explain here, I would recommend reading section 2.3 and 2.4 of Intel's TXT Software Development Guide[5] for all the details of the state and the prescribed initialization sequence. [5] https://www.intel.com/content/dam/www/public/us/en/documents/guides/intel-txt-software-development-guide.pdf >> A quick note on terminology. 
The larger open source project itself is >> called Trenchboot, which is hosted on Github (links below). The kernel >> feature enabling the use of the x86 technology is referred to as "Secure >> Launch" within the kernel code. As such the prefixes sl_/SL_ or >> slaunch/SLAUNCH will be seen in the code. The stub code discussed above >> is referred to as the SL stub. > Is this only for Trenchboot? I'm a bit lost. What is it anyway?

TrenchBoot is a meta-project that is working to bring a unified approach to using DRTM across CPU implementations and open source projects. Currently we are working to: integrate a dynamic launch preamble (the code that sets up for calling the dynamic launch CPU instruction) into GRUB; build an open AMD Secure Loader that aligns with how Intel's SINIT ACM hands off to an MLE; bring the first directly launchable implementation to Linux (Secure Launch), with Xen being the next directly launchable implementation; and provide the u-root project a Secure Launch initramfs init routine to demonstrate a policy-driven measurement and attestation framework.