On Saturday 10 February 2007 22:29, Tim Thornton wrote:
> [ lots of interesting material ]

Having read /some/ of this now, it might be useful to repeat it back to help 
others in the thread understand the basic ideas, or to allow me to be 
corrected if I've misunderstood :-). (The DRM use case will stay 
controversial, but I suspect understanding what's going on is useful.) 

In a trusted computing scenario, you don't actually own one computer, you own 
two in a single box - it just looks like one. (Well, given the amount of tech 
inside a PC these days, it's more a minimum of two computers in the box - a 
GPU can be called a computer as well.)

 +-----+      +-------------------+
 | TPM |<---->|   Main computer   |
 |     |      | (running some OS) |
 +-----+      +-------------------+

The TPM, by definition of being a computer, has its own CPU, local storage,
and so on. Part of its design is that at manufacture it is given its own
private/public key pair.

At this stage, this is little different (conceptually) from 2 computers
connected over a network by an ssh link. The difference is that the
connection is significantly harder to snoop.

However, in the way it's used, it more closely resembles SSL - ie https, for 
those unfamiliar. With SSL there are two modes:
   * Trusted & secure
   * Untrusted & secure

In both scenarios you have an exchange of keys in order to set up a session 
key, allowing you to be happy sending your credit card details over the 
network (among many other uses). This is what I mean by secure. However, you 
can have a secure link directly to someone pretending to be your bank, so you 
don't know whether the link is trusted.

Well, in SSL/TLS/HTTPS (take your pick, the principles are the same), you 
essentially get your public key signed by a trusted third party. These 
trusted third parties include Verisign, Thawte [1] etc.

   [1] Founded by Mark Shuttleworth, which is where he made his fortune,
       and is the reason Ubuntu exists today...

ie You can either run a SSL/TLS enabled webserver whose keys have been signed 
by one of these third parties, or not.
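That trusted/untrusted distinction can be sketched in a few lines of Python. 
This is a toy model only: HMAC stands in for real public-key signatures (so, 
unlike real TLS, the verifier here holds the CA's signing key rather than 
just its public key), and all the names and key values are made up:

```python
# Toy sketch of the SSL trust model. HMAC is a stand-in for real
# public-key signatures; in real TLS the browser verifies with the
# CA's *public* key only.
import hmac
import hashlib

CA_KEY = b"trusted-third-party-key"  # held by e.g. Verisign/Thawte

def ca_sign(server_pubkey: bytes) -> bytes:
    """The CA signs a server's public key, producing a certificate."""
    return hmac.new(CA_KEY, server_pubkey, hashlib.sha256).digest()

def browser_verifies(server_pubkey: bytes, certificate: bytes) -> bool:
    """A browser that trusts the CA checks the certificate it is shown."""
    expected = hmac.new(CA_KEY, server_pubkey, hashlib.sha256).digest()
    return hmac.compare_digest(expected, certificate)

bank_key = b"real-bank-public-key"
cert = ca_sign(bank_key)
print(browser_verifies(bank_key, cert))          # trusted & secure -> True
print(browser_verifies(b"fake-bank-key", cert))  # someone pretending -> False
```

The "untrusted & secure" mode is simply skipping `browser_verifies`: the 
session is still encrypted, but you have no idea who holds the other end.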

ie if you consider the two computers above by the following metaphor:
   * The TPM as an HTTPS website
   * The Main computer as a browser

Because the keys in the TPM have been signed by someone else, that browser can
check to see if the TPM is a real TPM or not.

Now the problem with this approach is that it introduces potential
bottlenecks into the system. As a result, there is another step you can add
in. Given this basic chain - can you make it such that the main computer can 
verify the TPM without talking to the third party all the time?

Well, if you get the TPM to talk (via the main computer in this case hopefully 
obviously) to another third party you can do this:

   * The TPM authenticates itself to this other third party

   * It generates a special key (DAA) which the third party then signs,
     giving the TPM a certificate. It can sign this using a private
     key and publish the public key. Let's call that public key "PK".
     Applications can either download PK on demand or even compile it
     into their code. This includes open source apps because it's not
     a secret.

   * Any application that wishes to authenticate any TPM then does
     this:
      * It essentially asks the TPM to sign something using this key
        (DAA), and also provides the certificate as signed by the third
        party. Since PK is public, the application can verify that the
        thing just signed by the TPM is valid.
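The three steps above can be sketched the same way. Again this is a toy: 
HMAC stands in for the real DAA group-signature scheme, so the verifier 
here ends up recomputing with key material that in a real TPM never leaves 
the chip or the issuer; the key names are illustrative only:

```python
# Much-simplified sketch of the DAA flow. HMAC is a stand-in for the
# real anonymous group signatures used by DAA.
import hmac
import hashlib

ISSUER_KEY = b"daa-issuer-key"  # the third party; "PK" is its public half
TPM_DAA_KEY = b"tpm-daa-key"    # the special key generated inside the TPM

# Steps 1-2: the TPM authenticates to the issuer, which signs the DAA
# key, giving the TPM a certificate.
daa_certificate = hmac.new(ISSUER_KEY, TPM_DAA_KEY, hashlib.sha256).digest()

# Step 3: an application challenges the TPM with a fresh value, and the
# TPM signs it with the DAA key.
challenge = b"application-nonce"
tpm_signature = hmac.new(TPM_DAA_KEY, challenge, hashlib.sha256).digest()

# The application checks both the certificate (against PK) and the
# signature over its own challenge.
cert_ok = hmac.compare_digest(
    daa_certificate,
    hmac.new(ISSUER_KEY, TPM_DAA_KEY, hashlib.sha256).digest())
sig_ok = hmac.compare_digest(
    tpm_signature,
    hmac.new(TPM_DAA_KEY, challenge, hashlib.sha256).digest())
print(cert_ok and sig_ok)
```

The point of the structure is the same as in the text: once the certificate 
exists, the application never needs to contact the issuer again.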

Again, whilst that may sound relatively esoteric, it's actually very much the 
same technique as using PGP or GPG for email. You have public/private keys. 
You get your public key signed by someone. The slight difference (I think) is 
that recipients can be given another public key to use to verify the sender.

As a result, it's clearly possible to create a "rogue" TPM (including 
virtualised ones), but people can tell the difference.

Probably the weakest link in the chain here is the DAA's public certificate,
but then that's why revocation gets built in as well. The other obvious weak
point is where the TPMs are originally endorsed, since to be useful that
process needs to be networked, and software bugs are easier to find/exploit
than searching a large key space.

To put this into context, your computer can do the equivalent of connecting at 
startup to a machine only you own, and only you have access to. This machine 
can be used to check the integrity of your system, and unlock secrets on the 
system. That machine cannot be accessed directly by others which gives you a 
level of confidence in this process.

Ignoring the DRM use case or the restricting-your-computer scenarios, having 
a secure location for helping check system integrity and protect the 
contents of your hard drive is useful.
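The "check the integrity of your system" part works roughly like the 
hash-chaining sketch below - a loose analogue of a TPM's PCR "extend" 
operation; the component names are made up:

```python
# Sketch of TPM-style integrity measurement: each boot component is
# hashed into a running chain, so the final value depends on every
# component in order (a rough analogue of a TPM PCR register).
import hashlib

def extend(pcr: bytes, component: bytes) -> bytes:
    """Fold a component's hash into the running measurement."""
    return hashlib.sha256(pcr + hashlib.sha256(component).digest()).digest()

pcr = b"\x00" * 32
for component in (b"bootloader", b"kernel", b"init"):
    pcr = extend(pcr, component)

# A secret "sealed" to this measurement is only released if the exact
# same chain is reproduced; one changed component gives a different value.
tampered = b"\x00" * 32
for component in (b"bootloader", b"evil-kernel", b"init"):
    tampered = extend(tampered, component)
print(pcr != tampered)
```

Sealing a disk-encryption key to the measurement is then just "only unlock 
the secret if the chain comes out to the expected value".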

Clearly the same technology can be used by an operating system that wishes to 
prevent you from (eg) replacing video or audio drivers, and that same 
operating system can also enforce whatever restrictions it likes. When that 
operating system starts up it could use the TPM to check its own integrity 
and decide to shut down functionality, or simply not start up, based on the 
results of this check - but that's not inherently caused by the TPM or the 
TCPA approach.

After all, package installers and email clients do similar things. A package 
installer can choose not to install a package if its MD5 check fails or, if 
it's signed, if the signature is bogus. Similarly, some email clients act 
very differently depending on whether the email has been signed and whether 
the signature is verifiable.
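The installer check mentioned above is just a digest comparison. A minimal 
sketch (SHA-256 used here in place of MD5, and the payloads are invented):

```python
# Sketch of what a package installer does: refuse to install when the
# payload's digest doesn't match the one published in the repository
# metadata.
import hashlib

# Digest published alongside the package by the repository.
published_digest = hashlib.sha256(b"package-payload-v1").hexdigest()

def install(payload: bytes, expected: str) -> bool:
    if hashlib.sha256(payload).hexdigest() != expected:
        return False  # digest mismatch: refuse to install
    return True       # digest matches: proceed with installation

print(install(b"package-payload-v1", published_digest))  # True
print(install(b"tampered-payload", published_digest))    # False
```

A TPM-backed check is the same shape, with the expected values and the 
comparison anchored in hardware rather than in repository metadata.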

So whilst TCPA and a TPM can be used to implement "treacherous computing", 
it's like any other technology - it has good/sensible uses too.

Given how many people cried foul over XP's activation (which, if extended, 
could be used for the DAA scenario), and the fact that some installers for XP 
don't require activation, I'd be surprised if at a later point in time anyone 
implemented a system that required activation as a key point.

(That said, it's entirely practical in a games console environment, since you 
could do that activation at manufacture.)

However there are other interesting uses too - it potentially enables 
widespread use of PGP/GPG with good quality keys for email (among other 
things). This increases the likelihood that at some point people will use 
TPMs as the basis for encrypting all their personal communications (as 
advocated by some), while being able to expect the recipient to be able to 
decrypt them.

Combine this with OpenID and you probably have some very interesting, and 
positive for the user, scenarios.

Anyway, thanks for the pointers Tim, I'll keep on pondering them, and feel 
free to point out where I've oversimplified to the point of misinforming or 
simply misunderstood some points.

Regards,


Michael.
--
Kamaelia Project Lead
http://kamaelia.sourceforge.net/Home
Senior Research Engineer, BBC Research

[all opinions above are my own, no one elses :) ]
-
Sent via the backstage.bbc.co.uk discussion group.  To unsubscribe, please 
visit http://backstage.bbc.co.uk/archives/2005/01/mailing_list.html.  
Unofficial list archive: http://www.mail-archive.com/backstage@lists.bbc.co.uk/
