Re: TCPA/Palladium -- likely future implications (Re: dangers of TCPA/palladium)

2002-08-11 Thread Peter Fairbrother

Adam Back wrote:
[...]
> - It is always the case that targeted people can have hardware
> attacks perpetrated against them.  (Keyboard sniffers placed during
> court-authorised break-ins, as the FBI used in the mob case of a
> PGP-using Mafioso [1]).

[...]

> [1] "FBI Bugs Keyboard of PGP-Using Alleged Mafioso", 6 Dec 2000,
> slashdot

That was a software keylogger (actually two software keyloggers), not
hardware. 

(IMO Scarfo's lawyers should never have dealt, assuming the evidence was
necessary for a conviction, but the FBI statement about the techniques used
was probably too obfuscated for them - it took me a good week to understand
it. I emailed them, but got no reply.

Incidentally, Nicky Scarfo used his father's prison number for the password,
so a well-researched directed dictionary attack would have worked anyway.)
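(A directed dictionary attack of the kind described above is trivial to sketch. Everything below is a hypothetical toy -- the password, the candidate facts, and the use of a bare SHA-256 check, where a real attack would target the actual passphrase verifier and its KDF -- but the shape is the same: enumerate guesses derived from known personal details rather than a generic wordlist.)

```python
import hashlib

def candidates(facts):
    """Generate password guesses from known personal facts
    (numbers, names) plus a few trivial variations."""
    for f in facts:
        yield f
        yield f.lower()
        yield f.upper()
        yield f[::-1]          # reversed
        yield f + "1"          # common suffix

def directed_attack(target_hash, facts):
    """Return the first candidate whose SHA-256 matches target_hash,
    or None if the directed dictionary is exhausted."""
    for guess in candidates(facts):
        if hashlib.sha256(guess.encode()).hexdigest() == target_hash:
            return guess
    return None

# Toy demo: the "passphrase" is a relative's prison-style number.
secret = "05558-050"                     # entirely made up
h = hashlib.sha256(secret.encode()).hexdigest()
print(directed_attack(h, ["12345", "05558-050", "scarfo"]))  # -> 05558-050
```

The point is that a biography-derived candidate list is tiny -- a few hundred guesses -- compared to a brute-force keyspace, so no keylogger is even needed against such a password.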


The FBI reputedly can (usually, on Windows boxen) now install similar
software keyloggers remotely, without needing to break in.


-- Peter Fairbrother


-
The Cryptography Mailing List
Unsubscribe by sending "unsubscribe cryptography" to [EMAIL PROTECTED]



It won't happen here (was Re: TCPA/Palladium -- likely future implications)

2002-08-10 Thread Marcel Popescu

From: "AARG! Anonymous" <[EMAIL PROTECTED]>

> Think about it: this one innocuous little box holding the TPME key could
> ultimately be the root of trust for the entire world.  IMO we should
> spare no expense in guarding it and making sure it is used properly.
> With enough different interest groups keeping watch, we should be able
> to keep it from being used for anything other than its defined purpose.

Now I know the general opinion of AARG, and I can't say I much disagree. But
I want to comment on something else here, which I find to be a common trait
among US citizens: "it can't happen here". The Chinese gov't can do anything
they like, because any citizen who tried to "keep watch" would find
himself shot. What basic law of the universe says that this can't happen in
the US? What exactly will prevent them, 10 years from now, from saying
"compelling state interests require that we get to do whatever we want with
the little box"? You already have an official "gov't against 1st Amendment"
policy, from what I've read.

Mark






Re: TCPA/Palladium -- likely future implications

2002-08-09 Thread James A. Donald

--
On 9 Aug 2002 at 17:15, AARG! Anonymous wrote:
> to understand it you need a true picture of TCPA rather than the 
> false one which so many cypherpunks have been promoting.

As TCPA is currently vaporware, projections of what it will be, 
and how it will be used are judgments, and are not capable of 
being true or false, though they can be plausible or implausible.

Even with the best will in the world, and I do not think the 
people behind this have the best will in the world, there is an 
inherent conflict between tamper resistance and general purpose 
programmability.  To prevent me from getting at the bits as they 
are sent to my sound card or my video card, the entire computer, 
not just the dongle, has to be somewhat tamper resistant, which is 
going to make the entire computer somewhat less general purpose 
and programmable, thus less useful.

The people behind TCPA might want to do something more evil than 
you say they want to do; if they want to do what you say they want 
to do, they might be prevented by law enforcement, which wants 
something considerably more far-reaching and evil; and if they
want to do it, and law enforcement refrains from reaching out and 
taking hold of their work, they still may be unable to do it for 
technical reasons. 

--digsig
 James A. Donald
 6YeGpsZR+nOTh/cGwvITnSR3TdzclVpR0+pr3YYQdkG
 D7ZUyyAS+7CybaH0GT3tHg1AkzcF/LVYQwXbtqgP
 2HBjGwLqIOW1MEoFDnzCH6heRfW1MNGv1jXMIvtwb





Re: TCPA/Palladium -- likely future implications

2002-08-09 Thread AARG! Anonymous

I want to follow up on Adam's message because, to be honest, I missed
his point before.  I thought he was bringing up the old claim that these
systems would "give the TCPA root" on your computer.

Instead, Adam is making a new point, which is a good one, but to
understand it you need a true picture of TCPA rather than the false one
which so many cypherpunks have been promoting.  Earlier Adam offered a
proposed definition of TCPA/Palladium's function and purpose:

> "Palladium provides an extensible, general purpose programmable
> dongle-like functionality implemented by an ensemble of hardware and
> software which provides functionality which can, and likely will be
> used to expand centralised control points by OS vendors, Content
> Distrbuters and Governments."

IMO this is total bullshit, political rhetoric that is content-free
compared to the one I offered:

: Allow computers separated on the internet to cooperate and share data
: and computations such that no one can get access to the data outside
: the limitations and rules imposed by the applications.

It seems to me that my definition is far more useful and appropriate in
really understanding what TCPA/Palladium are all about.  Adam, what do
you think?

If we stick to my definition, you will come to understand that the purpose
of TCPA is to allow application writers to create closed spheres of trust,
where the application sets the rules for how the data is handled.  It's
not just DRM, it's Napster and banking and a myriad other applications,
each of which can control its own sensitive data such that no one can
break the rules.

At least, that's the theory.  But Adam points out a weak spot.  Ultimately
applications trust each other because they know that the remote systems
can't be virtualized.  The apps are running on real hardware which has
real protections.  But applications know this because the hardware has
a built-in key which carries a certificate from the manufacturer, who
is called the TPME in TCPA.  As the applications all join hands across
the net, each one shows his cert (in effect) and all know that they are
running on legitimate hardware.

So the weak spot is that anyone who has the TPME key can run a virtualized
TCPA, and no one will be the wiser.  With the TPME key they can create
their own certificate that shows that they have legitimate hardware,
when they actually don't.  Ultimately this lets them run a rogue client
that totally cheats, disobeys all the restrictions, shows the user all
of the data which is supposed to be secret, and no one can tell.
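Abstractly, the forgery described here looks like the following (a toy sketch: real TCPA endorsement uses RSA certificates rather than HMAC, and every key name below is made up -- HMAC under the CA key merely stands in for the TPME's signature):

```python
import hmac, hashlib

TPME_KEY = b"leaked-tpme-private-key"   # the stolen endorsement CA key

def endorse(tpm_pubkey: bytes) -> bytes:
    """TPME's endorsement: a signature over the TPM public key."""
    return hmac.new(TPME_KEY, tpm_pubkey, hashlib.sha256).digest()

def verify_endorsement(tpm_pubkey: bytes, cert: bytes) -> bool:
    """What a remote application checks before trusting a peer."""
    return hmac.compare_digest(cert, endorse(tpm_pubkey))

# A genuine TPM's key, endorsed at manufacture:
real_tpm = b"real-hardware-key"
real_cert = endorse(real_tpm)

# An attacker holding the TPME key endorses a *software* TPM emulator:
fake_tpm = b"virtualized-tpm-key"
fake_cert = endorse(fake_tpm)

# The verifier cannot tell the two apart:
print(verify_endorsement(real_tpm, real_cert))   # True
print(verify_endorsement(fake_tpm, fake_cert))   # True
```

Nothing in the protocol distinguishes the virtualized TPM from real hardware, because the only evidence of "real hardware" is the endorsement signature itself.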

Furthermore, if people did somehow become suspicious about one particular
machine, with access to the TPME key the eavesdroppers can just create
a new virtual TPM and start the fraud all over again.

It's analogous to how someone with Verisign's key could masquerade as
any secure web site they wanted.  But it's worse because TCPA is almost
infinitely more powerful than PKI, so there is going to be much more
temptation to use it and to rely on it.

Of course, this will be inherently somewhat self-limiting as people learn
more about it, and realize that the security provided by TCPA/Palladium,
no matter how good the hardware becomes, will always be limited to
the political factors that guard control of the TPME keys.  (I say
keys because likely more than one company will manufacture TPMs.
Also in TCPA there are two other certifiers: one who certifies the
motherboard and computer design, and the other who certifies that the
board was constructed according to the certified design.  The NSA would
probably have to get all 3 keys, but this wouldn't be that much harder
than getting just one.  And if there are multiple manufacturers then
only 1 key from each of the 3 categories is needed.)
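That multiple-manufacturer observation can be made concrete with a toy acceptance check (all certifier names below are hypothetical): a verifier accepts any combination of one trusted key per category, so an attacker needs only the easiest-to-obtain key in each category, not any particular triple.

```python
# Toy model: an attestation is accepted if each of the three certs
# chains to *any* trusted key in its category.
trusted = {
    "tpme":        {"IBM-TPME", "Infineon-TPME"},
    "design":      {"DesignCert-A", "DesignCert-B"},
    "conformance": {"ConfLab-1", "ConfLab-2"},
}

def accept(certs: dict) -> bool:
    """Accept iff every category is covered by some trusted key."""
    return all(certs.get(cat) in keys for cat, keys in trusted.items())

# An attacker needs only ONE key per category, whichever is easiest,
# and may mix keys from different manufacturers freely:
stolen = {"tpme": "Infineon-TPME", "design": "DesignCert-B",
          "conformance": "ConfLab-1"}
print(accept(stolen))   # True
```

So more manufacturers means more keys, but the attacker's job stays roughly constant: compromise one signer per category.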

To protect against this, Adam offers various solutions.  One is to do
crypto inside the TCPA boundary.  But that's pointless, because if the
crypto worked, you probably wouldn't need TCPA.  Realistically most of the
TCPA applications can't be cryptographically protected.  "Computing with
encrypted instances" is a fantasy.  That's why we don't have all those
secure applications already.

Another is to use a web of trust to replace or add to the TPME certs.
Here's a hint.  Webs of trust don't work.  Either they require strong
connections, in which case they are too sparse, or they allow weak
connections, in which case they are meaningless and anyone can get in.

I have a couple of suggestions.  One early application for TCPA is in
closed corporate networks.  In that case the company usually buys all
the computers and prepares them before giving them to the employees.
At that time, the company could read out the TPM public key and sign
it with the corporate key.  Then they could use that cert rather than
the TPME cert.  This would protect the company's sensitive data against
eavesdroppers who manage to virtualize their hardware.
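A minimal sketch of that provisioning scheme, again with HMAC standing in for real signatures and all key material hypothetical: the company signs the TPM public key it read out at provisioning time, and its applications check the corporate cert instead of the TPME cert.

```python
import hmac, hashlib

CORPORATE_KEY = b"corporate-signing-key"   # held only by the company

def corporate_endorse(tpm_pubkey: bytes) -> bytes:
    """Signed at provisioning, before the machine is handed out."""
    return hmac.new(CORPORATE_KEY, tpm_pubkey, hashlib.sha256).digest()

def corporate_verify(tpm_pubkey: bytes, cert: bytes) -> bool:
    """Company apps trust this cert, not the manufacturer's TPME cert."""
    return hmac.compare_digest(cert, corporate_endorse(tpm_pubkey))

# Provisioning: read out the employee machine's TPM public key and sign it.
employee_tpm = b"employee-machine-tpm-key"
cert = corporate_endorse(employee_tpm)

# A TPM the company never provisioned is rejected, even if it carries
# a valid (possibly forged) TPME endorsement:
print(corporate_verify(employee_tpm, cert))   # True
print(corporate_verify(b"unknown-tpm", cert)) # False
```

The corporate key becomes the trust root, so a compromised TPME key no longer lets an outsider's virtualized TPM into the company's sphere of trust.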

For the larger public network, the first thing I would suggest is that
the TPME

TCPA/Palladium -- likely future implications (Re: dangers of TCPA/palladium)

2002-08-09 Thread Adam Back

On Thu, Aug 08, 2002 at 09:15:33PM -0700, Seth David Schoen wrote:
> Back in the Clipper days [...] "how do we know that this
> tamper-resistant chip produced by Mykotronix even implements the
> Clipper spec correctly?".

The picture is related but has some extra wrinkles with the
TCPA/Palladium attestable donglization of CPUs.

- It is always the case that targeted people can have hardware
attacks perpetrated against them.  (Keyboard sniffers placed during
court-authorised break-ins, as the FBI used in the mob case of a
PGP-using Mafioso [1]).

- In the clipper case people didn't need to worry much if the clipper
chip had malicious deviations from spec, because Clipper had an openly
stated explicit purpose to implement a government backdoor -- there's
no need for NSA to backdoor the explicit backdoor.

In the TCPA/Palladium case, however, the hardware-tampering risk you
identify is, as you say, relevant:

- It's difficult for the user to verify hardware.  

- Also: it wouldn't be that hard to manufacture plausibly deniable
implementation "mistakes" that could equate to a backdoor -- eg the
random number generators used to generate the TPM/SCP private device
keys.

However, beyond that there is an even softer target for would-be
backdoorers:

- the TCPA/Palladium hardware manufacturers' endorsement CA keys.

These are the keys to the virtual kingdom -- the virtual kingdom
formed by the closed space within which attested applications and
software agents run.


So specifically let's look at the questions arising:

1. What could a hostile entity(*) do with a copy of a selection of
hardware manufacturer endorsement CA private keys?

( (*) where the hostile-entity candidates would be, for example,
secret service agencies, law enforcement or "homeland security"
agencies in western countries, the RIAA/MPAA in pursuit of their
quest to jam and DoS peer-to-peer file-sharing networks, the Chinese
government, the Taiwanese government (they make a lot of the
equipment, right?) and so on).

a. Who needs to worry -- who will be targeted?

Who needs to worry about this depends on how overt third-party
ownership of these keys is, and hence on the pool of people who would
likely be targeted.

If it's very covert, it would only be used plausibly deniably, and
only for Nat Sec / Homeland Security purposes.  If it becomes overt
over time -- a publicly acknowledged but supposedly court-controlled
affair like Clipper, or keys made available to the wide range of
entities that desire them, for example to the RIAA / MPAA so they can
do the hacking they have been pushing for -- well, then we all need
to worry.


To analyse the answer to question 1, we first need to think about
question 2:

2. What kinds of TCPA/Palladium integrity-dependent "trusted"
applications are likely to be built?

Given the powerful (though balance-of-control-changing) new remotely
attestable security features provided by TCPA/Palladium, all kinds of
remote services become possible -- for example (though all only to the
extent of the hardware's tamper resistance, and of the belief that your
attacker doesn't have access to a hardware endorsement CA private key):

- general Application Service Providers (ASPs) that you don't have to
trust to read your data

- less traceable peer-to-peer applications

- DRM applications that make a general purpose computer secure against
BORA (Break Once Run Anywhere), though of course not secure against
ROCA (Rip Once Copy Everywhere) -- which will surely continue to
happen with ripping shifting to hardware hackers.

- general-purpose unreadable sandboxes to run general-purpose
CPU-for-rent computing farms for hire, where the sender knows you
can't read his code, his input data or his output data, or tamper
with the computation.

- file-sharing while robustly hiding knowledge and traceability of
content even from the node serving it -- previously a research
question, now an easy coding problem with efficient

- anonymous remailers where you have more assurance that a given node
is not logging and analysing the traffic being mixed by it


But of course all of these distributed applications, positive and
negative (depending on your viewpoint), are limited in the assurance
of their non-cryptographically-assured aspects:

- to the tamper resistance of the device

- to the extent of the user's confidence that an entity hostile to
them doesn't have the endorsement CA's private key for the respective
remote servers implementing the network application they are relying
on


and a follow-on question to question 2:

3. Will any software companies still aim for cryptographic assurance?

(cryptographic assurance means you don't need to trust someone not to
reverse-engineer the application -- ie you can't read the data because
it is encrypted with a key derived from a password that is stored only
in the user's head).
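A sketch of that kind of cryptographic assurance, using a standard password-based KDF (PBKDF2 from the Python standard library; the password and salt values are toys): the key is re-derived from the user's password at use time and never stored, so there is nothing on the machine for a reverse engineer to extract.

```python
import hashlib

def derive_key(password: str, salt: bytes) -> bytes:
    """Derive the data key from the password in the user's head.
    The key exists only transiently, while the password is supplied."""
    return hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 200_000)

salt = b"fixed-demo-salt"    # real code: random per-user salt, stored with the data

k1 = derive_key("correct horse", salt)   # at encryption time
k2 = derive_key("correct horse", salt)   # re-derived at decryption time
k3 = derive_key("wrong guess", salt)     # an attacker's attempt

print(k1 == k2, k1 == k3)   # True False
```

No tamper-resistant hardware is assumed here -- the assurance rests entirely on the password's entropy and the KDF's cost, which is exactly the property that distinguishes this class of application from TCPA/Palladium-dependent ones.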

The extended platform allows you to build new classes of applications
which aren't currently buildable to cryptographic levels