Hi Eleanor. tl;dr: Today we bootstrap from the TPM.

"To have a secure channel between two processes/compartments (in this case,
the CPU of the hosted machine and the remote,
non-service-provider-controlled system), they must share a secret."

This is a good question since it's not necessarily clear. Let's call the
untrusted host H and a local management system M. We can provide H with a
non-secret boot image that contains M's public key; that will be the only
key authorized to connect to H.
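
To make that concrete, here's a toy sketch (my own illustration, not the
real handshake; the Ed25519 key type and the helper names are just
placeholders): H pins M's public key from the boot image and challenges
whoever connects to sign a fresh nonce.

    import os
    from cryptography.hazmat.primitives.asymmetric import ed25519
    from cryptography.exceptions import InvalidSignature

    # M's side: M holds the private key; only its public half is shipped.
    m_private_key = ed25519.Ed25519PrivateKey.generate()

    # H's side: M's public key is baked into the non-secret boot image.
    pinned_m_public_key = m_private_key.public_key()

    def h_accepts_connection(sign_challenge):
        # H challenges the connecting party with a fresh nonce; only the
        # holder of M's private key can produce a valid signature over it.
        nonce = os.urandom(32)
        signature = sign_challenge(nonce)        # performed remotely by M
        try:
            pinned_m_public_key.verify(signature, nonce)
            return True
        except InvalidSignature:
            return False

    print(h_accepts_connection(m_private_key.sign))   # True: it's really M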

How does M know it's talking to a valid H, since H is by definition
untrusted?

Here's where we go into trusted computing land:
The host H will have a trusted platform module (TPM). When H boots up, it
will measure all software state into platform configuration registers
(PCRs) in the TPM. See Intel Trusted Execution Technology (TXT) for more
info on how this works.
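
A rough model of what "measure into PCRs" means (simplified: the real TXT
launch involves the SINIT ACM and a fixed PCR usage, and the boot stages
below are placeholders of my own): each stage is hashed and folded into a
PCR with an extend operation, so the final value commits to everything
measured and the order it was measured in.

    import hashlib

    def pcr_extend(pcr, measured_blob):
        # TPM 1.2-style extend: new PCR = SHA-1(old PCR || SHA-1(blob)).
        # A PCR can only be extended, never set directly.
        return hashlib.sha1(pcr + hashlib.sha1(measured_blob).digest()).digest()

    pcr = b"\x00" * 20                                  # PCRs reset on reboot
    boot_stages = [b"bios", b"bootloader", b"kernel", b"initrd"]  # placeholders
    for stage in boot_stages:
        pcr = pcr_extend(pcr, stage)

    print(pcr.hex())   # changes if any stage, or its order, changes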

The TPM will have a public key, which M can verify with a certificate chain
through the TPM manufacturer and a root CA. M can then engage in an
attestation protocol with H to prove that H's TPM knows the corresponding
private key. M will also obtain signed PCR contents, which it can validate.
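
Sketching what M's check could look like (again a simplification of my own:
the quote layout, the parse_quote helper, and the RSA/SHA-1 parameters are
stand-ins for the real TPM structures, and the AIK certificate-chain check
is assumed to happen separately):

    from cryptography.hazmat.primitives import hashes
    from cryptography.hazmat.primitives.asymmetric import padding
    from cryptography.exceptions import InvalidSignature

    # Toy quote layout (my invention): 20-byte nonce || concatenated PCRs.
    def parse_quote(quote_blob):
        return quote_blob[:20], quote_blob[20:]

    def m_verifies_quote(aik_public_key, quote_blob, signature,
                         nonce, expected_pcr_blob):
        # 1. The signature must verify under the TPM's attestation key,
        #    whose certificate chain (checked separately) leads back to
        #    the TPM manufacturer and a root CA.
        try:
            aik_public_key.verify(signature, quote_blob,
                                  padding.PKCS1v15(), hashes.SHA1())
        except InvalidSignature:
            return False
        # 2. The quote must embed the fresh nonce M chose (anti-replay),
        #    and the PCRs must match the known-good configuration.
        reported_nonce, reported_pcrs = parse_quote(quote_blob)
        return reported_nonce == nonce and reported_pcrs == expected_pcr_blob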

If M trusts H's TPM, it will believe it is talking to a system which booted
with a specific, unmodified software configuration and will only accept
connections from M's public key. The promise of TXT is that if malware
modifies the boot image, boot parameters, BIOS, SINIT, etc., then different
values will be measured and attestation will fail.

What if the TPM is compromised?

Then an attacker can forge measurements and trick M into talking to a
malicious system. There are some known potential TPM attacks, but the bar
for an attacker is significantly higher than it is today. Regardless,
eliminating the dependency on a TPM is an active area of research.

Is attestation the end of the story?

No, attestation is necessary but not sufficient. Even if you attest the
system, software state is still vulnerable while in memory and on the bus.
Think DMA, cold boot, NV-DIMMs, bus analyzers, etc. This is why we're fully
encrypting data in the CPU before writing it to main memory.
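
As a toy model of that last point (purely illustrative: the real mechanism
would be hardware in the CPU/memory-controller path, not Python, and
AES-GCM is used here just to show that an authenticated cipher also catches
tampering on the bus):

    import os
    from cryptography.hazmat.primitives.ciphers.aead import AESGCM

    memory_key = AESGCM.generate_key(bit_length=128)   # never leaves the "CPU"
    aead = AESGCM(memory_key)

    def write_to_dram(plaintext_cache_line):
        # Encrypt (and authenticate) before the data crosses the memory bus.
        nonce = os.urandom(12)
        return nonce, aead.encrypt(nonce, plaintext_cache_line, None)

    def read_from_dram(nonce, ciphertext):
        # Decrypt on the way back in; raises InvalidTag if DRAM was tampered with.
        return aead.decrypt(nonce, ciphertext, None)

    nonce, stored = write_to_dram(b"secret application state")
    assert read_from_dram(nonce, stored) == b"secret application state"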

There's also a risk because the kernel and drivers are written to trust the
physical devices in the system, so you also need to lock down all the
software interfaces from the CPU to the rest of the physical host.

On Fri, Jun 21, 2013 at 1:32 PM, Eleanor Saitta <e...@dymaxion.org> wrote:
>
> To have a secure channel between two processes/compartments (in this
> case, the CPU of the hosted machine and the remote,
> non-service-provider-controlled system), they must share a secret.
> Just encrypting local system memory with a key generated on the CPU
> doesn't permit secure communication - e.g., you have no way of getting
> data in and out of the compartment.  Doing computation on known inputs
> where trojaned hardware can read both the input data and the code
> isn't useful, because the work can just be done in parallel by your
> adversary.  So, to provide useful benefit, I assume you must have a
> method for secret-sharing between processes/compartments.  What is it?
>
>
