Hello Laszlo,

Thanks a lot for the great summary.
On 07/14/2017 08:04 PM, Laszlo Ersek wrote:

[snip]

> Here I should mention some ACPI and hardware aspects. Under TPM1
> (whose ACPI table was called "TCPA"), the TPM events (measurements
> I think) were logged in a reserved memory area described by the
> TCPA table. Under TPM2, the "TPM2" ACPI table does no such thing,
> it only helps identify the communication characteristics of the
> device, and the event log itself is accessible to the OS boot
> loader via the EFI_TCG2_PROTOCOL.
>
> (If you are curious how a legacy BIOS boot loader is supposed to
> read the event log from a TPM2-only device (no "TCPA" table): I
> don't have the slightest clue.)
>

The latest "TCG ACPI Specification" draft, from February 27, 2017,
mentions that the TPM2 table contains LAML and LASA fields describing
the TPM event log memory area. But, as Stefan pointed out, this is just
a draft, and it carries this disclaimer:

"Work in Progress: This document is an intermediate draft for comment
only and is subject to change without notice. Readers should not design
products based on this document."

So I think that will be supported in the future (if the draft doesn't
change and is published).

> I'm not sure about the exact characteristics of the virtual TPM
> that Stefan's swtpm project:
>
> https://github.com/stefanberger/swtpm
>
> combined with Amarnath's pending QEMU patches:
>
> http://email@example.com
>
> will expose to the guest. What I do know is that the current QEMU
> solution, which mostly forwards a physical (host) TPM to the guest,
> produces a "TPM2" ACPI table if said host TPM device is TPM2. The
> "TPM2" table is exposed to the guest OS with OVMF's help, and has
> the following fields:
>
> - address of control area: zero
> - start method: 6 (TIS plus Cancel)
> - platform specific params: none.
>
> This implies that neither ACPI activation (method 2) nor Command
> Response Buffer activation (method 7) nor a combination of these
> two (method 8) is available in QEMU.
Even though QEMU always exposes start method 6 (TIS + Cancel) in the
TPM2 table it builds for the guest, pass-through works even when the
host TPM advertises a different start method. For example, I have a
laptop with an Intel PTT fTPM whose start method is 2, but I'm able to
access the host TPM2 from the guest using the Linux tpm_tis driver.

> In brief, by not including these two modules, we avoid a "TPM2"
> ACPI table duplication. We also turn off the Memory Overwrite
> Request and Physical Presence Interface features -- which are both
> optional, as far as I can see, and very messy for OVMF's platform
> code.

Agreed.

> (3) Drivers (and features) that are *not* in edk2/SecurityPkg/Tcg:
>
> The Intel whitepaper discusses (and Peter also mentioned earlier)
> "dTPM" versus "fTPM".
>
> "dTPM" is basically TPM provided in publicly specified hardware,
> where the firmware can offer support, such as EFI_TCG2_PROTOCOL, but
> the OS can also directly drive the hardware. This is what QEMU
> offers with the TIS+Cancel start method (value 6). (The "Command
> Response Buffer" start method (value 7) would also qualify as
> "dTPM".) When the platform provides "dTPM", the _DSM method
> described above *may* be offered, but it is not required.
>
> "fTPM" is where the hardware is completely hidden from the OS, and
> is implemented fully in firmware. The corresponding start method
> values are 2 ("ACPI") and 8 ("ACPI with CRB"). In this case, the
> _DSM method is *required*.
>
> To my understanding, edk2 contains no "fTPM" implementation. The
> in-tree drivers recognize hardware that describes itself as
> TIS+Cancel (6) or CRB (7). Pure ACPI variants are neither
> recognized nor offered.

That's my understanding as well.
I see that the Valley View 2 / Minnowboard Max platform package
(Vlv2TbltDevicePkg) has references to fTPM and an FTPM_ENABLE variable
to enable it, but IIUC the fTPM driver is distributed as proprietary
binary files:

https://firmware.intel.com/projects/minnowboard-max

> I think TIS+Cancel / dTPM is the best match: the emulated TPM has to
> be implemented in virtual hardware (not just faked within the guest,
> in RAM), so that QEMU can secure the sensitive stuff from guest
> kernel level access.

Agreed.

Best regards,

--
Javier Martinez Canillas
Software Engineer - Desktop Hardware Enablement
Red Hat