I suspected that, as DRM appears to be one of the main motivators
(alongside trojan/malware protection) for trustworthy computing, you
will probably not be able to put the TPM into debug mode (ie
manipulate code without affecting the attested hash).  The ability to
do so breaks DRM.
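The reason debug mode is hard to grant is that the attested hash is
not a single checksum but a chain of measurements extended into a TPM
PCR, so any change to measured code changes the final value a remote
verifier sees.  A minimal sketch of that chaining (SHA-1 as in TPM
1.2; the boot-stage byte strings here are made-up placeholders, not
real measurements):

```python
import hashlib

def extend(pcr: bytes, measurement: bytes) -> bytes:
    # TPM extend: new PCR = SHA-1(old PCR || SHA-1(measured code))
    return hashlib.sha1(pcr + hashlib.sha1(measurement).digest()).digest()

# PCRs start zeroed at platform reset
pcr = b"\x00" * 20

# Hypothetical boot chain: each stage measures the next before running it
for stage in [b"bios", b"bootloader", b"kernel", b"driver"]:
    pcr = extend(pcr, stage)
original = pcr

# Re-run with one byte of the kernel changed: the attested value diverges
pcr = b"\x00" * 20
for stage in [b"bios", b"bootloader", b"Kernel", b"driver"]:
    pcr = extend(pcr, stage)
assert pcr != original  # a remote verifier would see a different quote
```

This is why a "debug mode" that lets the owner patch code while still
reporting the original PCR values amounts to letting the owner forge
attestations, which is exactly what the DRM use case forbids.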

Also bear in mind the Vista model, where it has been described that
inserting an unsigned device driver into the kernel will disable
playback of some media (content requiring DRM).  And also the secure
(encrypted) path between the trusted agent and the video/audio card,
and between the video/audio card and the monitor/speakers.  The HDMI
spec has these features, and you can already buy HDMI cards and
monitors (though I don't know if they have the encryption features
implemented/enabled).

I think a full user-control model will generally not be viewed as
compatible with this.  Ie there will be a direct conflict between the
user's ability to debug attested apps and DRM.

So then enters the possibility of debugging all apps except special
ones flagged as DRM, but if that technical ability is there, you won't
have to wait long for it to be used for everything: file formats
locked to editors, per-processor encrypted binaries, rented-by-the-hour
software you can't debug or inspect the memory space of, etc.

I think current CPUs / memory managers do not have the ring -1 /
curtained memory features, but a year or more ago Intel and AMD were
already talking about them.  So it's possible that, for example, the
hypervisor virtualization functionality in recent processors ties in
with those features and is already delivered?  Anyone know?

The device driver signing thing is clearly bypassable without a TPM,
and we know TPMs are not widely available at present.  ("All" that is
required is to disable or nop out the driver-signature verification in
the OS, or to replace the CA or cert it is verified against with your
own and sign your own drivers.)  How long until that OS binary patch
is made?
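To make that concrete: without a TPM measuring the OS itself, the
signature check is just OS code comparing against key material the OS
carries, and both the check and the key material can be patched.  A
toy model (a keyed hash stands in for real public-key signature
verification; every name here is hypothetical):

```python
import hashlib
import hmac

# Toy model: the OS ships with a trusted root baked into its binary and
# loads a driver only if the driver's signature verifies against it.
VENDOR_ROOT = b"vendor-root-key"

def sign(key: bytes, driver: bytes) -> bytes:
    # Stand-in for real asymmetric signing (eg Authenticode)
    return hmac.new(key, driver, hashlib.sha256).digest()

def os_load_driver(driver: bytes, sig: bytes, root: bytes = VENDOR_ROOT) -> bool:
    return hmac.compare_digest(sign(root, driver), sig)

driver = b"my unsigned kernel driver"
assert not os_load_driver(driver, b"\x00" * 32)  # rejected as shipped

# Bypass 1: swap the embedded root for one you control, sign your driver
MY_ROOT = b"attacker-root-key"
assert os_load_driver(driver, sign(MY_ROOT, driver), root=MY_ROOT)

# Bypass 2: "nop out" the check entirely -- patch it to always succeed
os_load_driver = lambda *args, **kwargs: True
assert os_load_driver(driver, b"anything at all")
```

With no hardware root of trust measuring the patched OS, nothing
detects either bypass; attestation is precisely the feature intended
to close that hole.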

Adam

On Tue, Oct 10, 2006 at 12:56:07PM +0100, Brian Gladman wrote:
> I haven't been keeping up to date with this trusted computing stuff over
> the last two years but when I was last involved it was accepted that it
> was vital that the owner of a machine (not necessarily the user) should
> be able to do the sort of things you suggest and also be able to exert
> ultimate control over how a computing system presents itself to the
> outside world.
> 
> Only in this way can we undermine the treacherous computing model of
> "trusted machines with untrusted owners" and replace it with a model in
> which "trust in this machine requires trust in its owner" on which real
> information security ultimately depends (I might add that even this
> model has serious potential problems when most machine owners do not
> understand security).
> 
> Does anyone know the current state of affairs on this issue within the
> Trusted Computing Group (and the marketed products of its members)?
>
> Adam Back wrote:
> > So the part about being able to detect viruses, trojans and attest
> > them between client-server apps that the client and server have a
> > mutual interest to secure is fine and good.
> > 
> > The bad part is that the user is not given control to modify the hash
> > and attest as if it were the original so that he can insert his own
> > code, debug, modify etc.
> > 
> > (All that is needed is a debug option in the BIOS to do this that only
> > the user can change, via BIOS setup.)

---------------------------------------------------------------------
The Cryptography Mailing List
Unsubscribe by sending "unsubscribe cryptography" to [EMAIL PROTECTED]
