On 01/10/2018 11:45 AM, Laszlo Ersek wrote:
On 01/10/18 16:19, Marc-André Lureau wrote:
Hi
----- Original Message -----
BTW, from the "TCG PC Client Platform TPM Profile (PTP)
Specification", it seems like the FIFO (TIS) interface is hard-coded
*in the spec* at FED4_0000h FED4_4FFFh. So we don't even have
to make that dynamic.
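(For reference, IIRC QEMU already hard-codes matching constants in
include/hw/acpi/tpm.h -- roughly the following; I'm quoting from memory,
so double-check the header:

  #define TPM_TIS_ADDR_BASE   0xFED40000
  #define TPM_TIS_ADDR_SIZE   0x5000    /* covers FED4_0000h .. FED4_4FFFh */
)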
Regarding CRB (as an alternative to TIS+Cancel), I'm trying to wrap
my brain around the exact resources that the CRB interface requires.
Marc-André, can you summarize those?
The device is a relatively simple MMIO-only device on the sysbus:
https://github.com/stefanberger/qemu-tpm/commit/2f9d06f93b285d4b39966a80867584c487035db9#diff-1ef22a0d46031cf2701a185aed8ae40eR282
The region is registered at the same address as TIS (it's not entirely
clear from the spec that it is supposed to be there, but my laptop's TPM
uses the same). And it uses a size of 0x1000, although it's also unclear
to me what the size of the command buffer should be (that size can also
be defined at run-time now, IIRC; I should adapt the code).
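Roughly, the MMIO setup in the realize function boils down to something
like this -- a sketch only, the names (CRBState, s->mmio,
tpm_crb_memory_ops) are approximations of what's in my WIP code:

  #define TPM_CRB_ADDR_BASE  0xFED40000

  static void tpm_crb_realize(DeviceState *dev, Error **errp)
  {
      CRBState *s = CRB(dev);

      /* the register block is a plain MMIO region at the fixed base */
      memory_region_init_io(&s->mmio, OBJECT(s), &tpm_crb_memory_ops, s,
                            "tpm-crb-regs", sizeof(struct crb_regs));
      memory_region_add_subregion(get_system_memory(),
                                  TPM_CRB_ADDR_BASE, &s->mmio);
      /* the command buffer is currently a second subregion placed right
       * above the registers (see the cmdmem hunk quoted further down) */
  }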
Thank you -- so the "immediate" register block is in MMIO space, and
(apparently) we can hard-code its physical address too.
My question is if we need to allocate guest RAM in addition to the
register block, for the command buffer(s) that will transmit the
requests/responses. I see the code you quote above says,
+    /* allocate ram in bios instead? */
+    memory_region_add_subregion(get_system_memory(),
+        TPM_CRB_ADDR_BASE + sizeof(struct crb_regs), &s->cmdmem);
... and AFAICS your commit message poses the exact same question :)
Option 1: If we have enough room in MMIO space above the register block
at 0xFED40000, then we could simply dump the CRB there too.
Option 2: If not (or we want to avoid Option 1 for another reason), then
the linker/loader script has to make the guest fw allocate RAM, write
the allocation address to the TPM2 table with an ADD_POINTER command,
and write the address back to QEMU with a WRITE_POINTER command. Is my
understanding correct?
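(For completeness, on the QEMU side Option 2 would look roughly like the
VM Generation ID pattern. This is a sketch from memory -- the fw_cfg file
names "etc/tpm/cmd" / "etc/tpm/cmd-addr" and the control_area_offset
variable are made up for illustration:

  /* 1) expose the (zero-filled) command buffer as a fw_cfg blob and ask
   *    the guest fw to allocate it in RAM */
  GArray *cmd_blob = g_array_new(false, true, 1);
  g_array_set_size(cmd_blob, 0x1000);
  bios_linker_loader_alloc(linker, "etc/tpm/cmd", cmd_blob,
                           0x1000 /* align */, false /* not FSEG */);

  /* 2) ADD_POINTER: patch the buffer's guest address into the TPM2 table */
  bios_linker_loader_add_pointer(linker,
      ACPI_BUILD_TABLE_FILE, control_area_offset, sizeof(uint64_t),
      "etc/tpm/cmd", 0);

  /* 3) WRITE_POINTER: have the guest fw report that address back to QEMU
   *    through a writeable fw_cfg file */
  bios_linker_loader_write_pointer(linker,
      "etc/tpm/cmd-addr", 0, sizeof(uint64_t),
      "etc/tpm/cmd", 0);
)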
I wonder why we'd want to bother with Option 2, since we have to place
the register block at a fixed MMIO address anyway.
(My understanding is that the guest has to populate the CRB, and then
kick the hypervisor, so at least the register used for kicking must be
in MMIO (or IO) space. And firmware cannot allocate MMIO or IO space
(for platform devices). Thus, the register block must reside at a
QEMU-determined GPA. Once we do that, why bother about RAM allocation?)
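(To illustrate with a guest-side sketch: everything below goes through
the fixed MMIO window, and the START register is the "kick". The register
offsets are from my reading of the PTP spec, and mmio_read32() /
mmio_write32() just stand for whatever volatile accessors the firmware
has, so treat this as approximate:

  #include <stdint.h>
  #include <string.h>

  #define CRB_BASE            0xFED40000UL
  #define CRB_CTRL_START      (CRB_BASE + 0x4C) /* write 1; TPM clears it */
  #define CRB_CTRL_CMD_LADDR  (CRB_BASE + 0x5C)
  #define CRB_CTRL_CMD_HADDR  (CRB_BASE + 0x60)

  static inline uint32_t mmio_read32(uintptr_t a)
  {
      return *(volatile uint32_t *)a;
  }

  static inline void mmio_write32(uintptr_t a, uint32_t v)
  {
      *(volatile uint32_t *)a = v;
  }

  static void crb_send(const void *cmd, uint32_t len)
  {
      /* the device advertises where the command buffer lives */
      uint64_t buf = ((uint64_t)mmio_read32(CRB_CTRL_CMD_HADDR) << 32) |
                     mmio_read32(CRB_CTRL_CMD_LADDR);

      memcpy((void *)(uintptr_t)buf, cmd, len);  /* populate the CRB    */
      mmio_write32(CRB_CTRL_START, 1);           /* kick the hypervisor */
      while (mmio_read32(CRB_CTRL_START) & 1) {
          /* poll; START is cleared once the response has been written */
      }
  }
)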
My experiments so far, running some Windows tests, indicate that for
TPM2, CRB+UEFI is required (and I managed to get an OVMF build with
TPM2 support).
Awesome!
A few tests failed; it seems the "Physical Presence Interface" (PPI) is
also required.
Required for what goal, exactly?
I think that ACPI interface allows running TPM commands during reboot,
by having the firmware take care of the security aspects.
Ugh :/ I mentioned those features in my earlier write-up, under points
(2f2b) and (2f2c). I'm very unhappy about them. They are a *huge* mess
for OVMF.
- They would require including (at least a large part of) the
Tcg2Smm/Tcg2Smm.inf driver, with all the complications I described
earlier as counter-arguments,
- they'd require including the MemoryOverwriteControl/TcgMor.inf driver,
- and they'd require some real difficult platform code in OVMF (e.g.
PEI-phase access to non-volatile UEFI variables, which I've by now
failed to upstream twice; PEI-phase access to all RAM; and more).
My personal opinion is that we should determine what goals require what
TPM features, and then we should aim at a minimal set. If I understand
correctly, PCRs and measurements already work (although the patches are
not upstream yet) -- is that correct?
Personally I think the SSDT/_DSM-based features (TCG Hardware
Information, TCG Memory Clear Interface, TCG Physical Presence
Interface) are very much out of scope for "TPM Enablement".
I think that's what Stefan is working on for SeaBIOS and the safe
memory region (sorry, I haven't read the whole discussion, as I am not
working on TPM atm)
Yeah, with e.g. the "TCG Memory Clear Interface" feature pulled into the
context -- from the "Platform Reset Attack Mitigation Specification" --,
I do understand Stefan's question. Said feature is about the OS setting
a flag in NVRAM, for the firmware to act upon, at next boot. "Saving a
few bytes across a reboot" maps to that.
I just posted the patches enabling a virtual memory device that helps
save these few bytes across a reboot. I chose the same address as EDK2
does, 0xffff0000, in the hope that this address can be reserved for this
purpose. It would be enabled for TPM TIS and the CRB through a simple
function call. I think it should be part of TPM enablement, at least to
have this device, since it adds 256 bytes that would need to be saved
for VM suspend. And I would like to get suspend/resume supported with
TPM TIS and an external device, so it should be there before we do that.
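Roughly, the device does nothing more than map a small chunk of RAM at
that address; the sketch below is illustrative rather than the exact
patch (names like tpm_ppi_init and "tpm-ppi-ram" are placeholders):

  #define TPM_PPI_ADDR_BASE  0xffff0000
  #define TPM_PPI_ADDR_SIZE  0x100        /* the 256 bytes mentioned above */

  static void tpm_ppi_init(Object *owner, MemoryRegion *sysmem)
  {
      MemoryRegion *ram = g_new0(MemoryRegion, 1);

      /* plain RAM, so the contents survive a guest reboot; it also becomes
       * extra state that has to be saved/restored for VM suspend */
      memory_region_init_ram(ram, owner, "tpm-ppi-ram",
                             TPM_PPI_ADDR_SIZE, &error_fatal);
      memory_region_add_subregion(sysmem, TPM_PPI_ADDR_BASE, ram);
  }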
(And, as far as I understand this spec, it tells traditional BIOS
implementors, "do whatever you want for implementing this NVRAM thingy",
while to UEFI implementors, it says, "use exactly this and that
non-volatile UEFI variable". Given this, I don't know how much
commonality would be possible between SeaBIOS and OVMF.)
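(From memory, the variable in question is L"MemoryOverwriteRequestControl"
under gEfiMemoryOverwriteControlDataGuid; the OS-side request looks
roughly like the EDK2-style snippet below. I'm quoting the GUID value from
memory, so double-check it against MdePkg before relying on it:

  EFI_GUID gEfiMemoryOverwriteControlDataGuid =
    { 0xe20939be, 0x32d4, 0x41be,
      { 0xa1, 0x50, 0x89, 0x7f, 0x85, 0xd4, 0x98, 0x29 } };

  EFI_STATUS Status;
  UINT8      Mor = 0x01;  /* bit 0 set: overwrite memory on the next boot */

  Status = gRT->SetVariable (
             L"MemoryOverwriteRequestControl",
             &gEfiMemoryOverwriteControlDataGuid,
             EFI_VARIABLE_NON_VOLATILE |
               EFI_VARIABLE_BOOTSERVICE_ACCESS |
               EFI_VARIABLE_RUNTIME_ACCESS,
             sizeof Mor,
             &Mor
             );
)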
Similarly, about "TCG Physical Presence Interface" -- defined in the TCG
Physical Presence Interface Specification --, I had written, "The OS can
queue TPM operations (?) that require Physical Presence, and at next
boot, [the firmware] would have to dispatch those pending operations."
That "queueing" maps to the same question (and NVRAM) again, yes.
The spec describes the ACPI interface but not the layout of the shared
memory between ACPI and firmware. This is not a problem if the vendor of
the firmware supplies both the ACPI code and the firmware code, which
they supposedly do. In QEMU's case it's a bit different. I of course
looked at EDK2 and adapted my ACPI code (and SeaBIOS code) to at least
support the same layout of the shared memory, hoping that this would
enable reuse of the EDK2 C code. I'm not sure what is better: following
their layout, or inventing my own (and being incompatible on purpose...)
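To make "layout" more concrete: the shared block is basically a small
struct that both the _DSM methods in the SSDT and the firmware poke at --
something along the lines of the sketch below (field names are
illustrative, not necessarily what the patches use):

  #include <stdint.h>

  struct tpm_ppi_shm {
      uint8_t  func[256];   /* which PP opcodes the firmware implements    */
      uint32_t pprq;        /* opcode queued by the OS via _DSM            */
      uint32_t pprm;        /* optional parameter for the queued opcode    */
      uint32_t lppr;        /* last opcode the firmware acted upon         */
      uint32_t fret;        /* firmware's return code for that last opcode */
      uint8_t  next_step;   /* resume point if an opcode needs a reboot    */
  } __attribute__((packed));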
Again, I'm unclear about any higher level goals / requirements here, but
I think these "extras" from the Trusted Computing Group are way beyond
TPM enablement.
See above why I think we should at least have the virtual memory device...
Thanks
Laszlo