Re: dangers of TCPA/palladium

2002-08-14 Thread Brian A. LaMacchia

Adam Shostack [EMAIL PROTECTED] wrote:
 On Mon, Aug 12, 2002 at 12:38:42AM -0700, Brian A. LaMacchia wrote:
 There are two parts to answering the first question:

 1) People (many people, the more the merrier) need to understand the
 code and what it does, and thus be in a position to be able to make
 an informed decision about whether or not they trust it.
 2) People reviewing the code, finding security flaws, and then
 reporting them so that we can fix them

 These are two very different things.  I don't think that anyone
 should count on the goodwill of the general populace to make their
 code provably secure. I think that paying people who are experts at
 securing code to find exploits in it must be part of the development
 process.

 How are these different?  If I'm understanding the code to decide if I
 trust it (item 1), it seems to me that I must do at least 2A and 2B:
 2C is optional :)

 Or are you saying that (2) is done by internal folks, and thus is a
 smaller set than (1)?

Yeah, I wasn't very clear here, was I?  What I was trying to say was that
there's a difference between understanding how a system behaves technically
(and deciding whether that behavior is correct from a technical perspective)
and understanding how a system behaves from a policy perspective (e.g.
social process & impact).  Those are two completely different questions.  2)
is all about verifying that the Palladium hardware and software components
technically operate as they are spec'd to.  1) is about the larger issue of
how Palladium systems interact with service providers (CAs, TTPs), what
processes one goes through to secure PII, etc.  The two groups of people
looking at 1) and 2) have non-zero intersection but are not equal.  And,
just to be clear, 2) is *not* done only by internal folks, but I expect that
the size of the set of people competent to do 2) is significantly smaller
than the size of the set of people who need to think about 1). :-)

--bal




Re: dangers of TCPA/palladium

2002-08-12 Thread Brian A. LaMacchia

Seth David Schoen [EMAIL PROTECTED] wrote:
 R. Hirschfeld writes:

 From: Peter N. Biddle [EMAIL PROTECTED]
 Date: Mon, 5 Aug 2002 16:35:46 -0700

 You can know this to be true because the
 TOR will be made available for review and thus you can read the
 source and decide for yourself if it behaves this way.

 This may be a silly question, but how do you know that the source
 code provided really describes the binary?

 It seems too much to hope for that if you compile the source code
 then the hash of the resulting binary will be the same, as the
 binary would seem to depend somewhat on the compiler and the
 hardware you compile on.

 I heard a suggestion that Microsoft could develop (for this purpose)
 a provably-correct minimal compiler which always produced identical
 output for any given input.  If you believe the proof of correctness,
 then you can trust the compiler; the compiler, in turn, should produce
 precisely the same nub when you run it on Microsoft's source code as
 it did when Microsoft ran it on Microsoft's source code (and you can
 check the nub's hash, just as the SCP can).

We are talking internally about doing exactly what is suggested here.  We
have also thought about working with someone to offer a free compilation
service in conjunction with the compiler, so you could compare your results
on your compiler with someone else's results on theirs - they could provide
you with a benchmark, so to speak.  This directly bears on how people assert
the trustworthiness of their applications, so whether or not we do it, for
certain applications some kind of system is bound to spring up.
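
Just to make the comparison step concrete: the check itself is nothing more
than hashing the two binaries and comparing.  (The file names and the choice
of SHA-1 below are illustrative only - a sketch, not a commitment to any
particular format, hash, or service.)

    import hashlib

    def file_digest(path):
        # Hex SHA-1 of a file, read in chunks so large binaries are fine.
        h = hashlib.sha1()
        with open(path, "rb") as f:
            for chunk in iter(lambda: f.read(65536), b""):
                h.update(chunk)
        return h.hexdigest()

    # "nub_local.bin" is the binary you built yourself with the published
    # deterministic compiler; "nub_service.bin" is what the compilation
    # service produced from the same source.  Both names are hypothetical.
    local = file_digest("nub_local.bin")
    service = file_digest("nub_service.bin")
    print("local:  ", local)
    print("service:", service)
    print("IDENTICAL" if local == service else "MISMATCH - the builds differ")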

 I don't know for sure whether Microsoft is going to do this, or is
 even capable of doing this.  It would be a cool idea.  It also isn't
 sufficient to address all questions about deliberate malfeasance.
 Back in the Clipper days, one question about Clipper's security was
 "how do we know the Clipper spec is secure?" (and the answer actually
 turned out to be "it's not").  But a different question was "how do
 we know that this tamper-resistant chip produced by Mykotronix even
 implements the Clipper spec correctly?".

 The corresponding questions in Palladium are "how do we know that the
 Palladium specs (and Microsoft's nub implementation) are secure?" and
 "how do we know that this tamper-resistant chip produced by a
 Microsoft contractor even implements the Palladium specs correctly?".

There are two parts to answering the first question:

1) People (many people, the more the merrier) need to understand the code
and what it does, and thus be in a position to be able to make an informed
decision about whether or not they trust it.
2) People reviewing the code, finding security flaws, and then reporting
them so that we can fix them

These are two very different things.  I don't think that anyone should count
on the goodwill of the general populace to make their code provably secure.
I think that paying people who are experts at securing code to find exploits
in it must be part of the development process.

We're also making some changes to our internal engineering processes that we
hope will help us identify security bugs earlier in the development cycle.
For example, we are generating our header files from specs (yes, actual
specs!). Any change to an interface must first be made to the spec, the spec
must be reviewed, and only then will the change be propagated into the code,
at which point the code will be code reviewed.  This doesn't fix coding bugs;
it does, however, aim squarely at fixing design and architectural flaws.  We
will make the specs & software widely available.  (That doesn't make the code
more secure, of course, but it does provide a basis for analysis &
comparison.)
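
To give a feel for what "header files generated from specs" means in
practice, here is a toy sketch.  The spec format, function names, and output
below are invented for this example only - they are not our actual
interfaces or tooling - but the flow is the same: the reviewed spec is the
single source of truth, and the header is emitted from it mechanically.

    # Toy illustration: emit a C header from a reviewed interface spec.
    # One declaration per line; the declarations themselves are hypothetical,
    # chosen only for the example.
    SPEC = """\
    int SealData(const unsigned char *data, unsigned int len)
    int UnsealData(unsigned char *out, unsigned int maxlen)
    """

    def emit_header(spec_text, guard="GENERATED_API_H"):
        lines = ["#ifndef %s" % guard, "#define %s" % guard, ""]
        for decl in spec_text.splitlines():
            decl = decl.strip()
            if decl:
                lines.append(decl + ";")  # each reviewed declaration becomes a prototype
        lines += ["", "#endif /* %s */" % guard]
        return "\n".join(lines)

    print(emit_header(SPEC))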

--bal

 In that sense, TCPA or Palladium can _reduce_ the size of the hardware
 trust problem (you only have to trust a small number of components,
 such as the SCP), and nearly eliminate the software trust problem, but
 you still don't have an independent means of verifying that the logic
 in the tamper-resistant chip performs according to its specifications.
 (In fact, publishing the plans for the chip would hardly help there.)

 This is a sobering thought, and it's consistent with ordinary security
 practice, where security engineers try to _reduce_ the number of
 trusted system components.  They do not assume that they can eliminate
 trusted components entirely.  In fact, any demonstration of the
 effectiveness of a security system must make some assumptions,
 explicit or implicit.  As in other reasoning, when the assumptions are
 undermined, the demonstration may go astray.

 The chip fabricator can still -- for example -- find a covert channel
 within a protocol supported by the chip, and use that covert channel
 to leak your keys, or to leak your serial number, or to accept secret,
 undocumented commands.

 This problem is actually not any _worse_ in Palladium than it is in
 existing hardware.  I am typing this in an 

Re: dangers of TCPA/palladium

2002-08-11 Thread Ben Laurie

AARG!Anonymous wrote:
 Adam Back writes:
 
 
- Palladium is a proposed OS feature-set based on the TCPA hardware
(Microsoft)
 
 
 Actually there seem to be some hardware differences between TCPA and
 Palladium.  TCPA relies on a TPM, while Palladium uses some kind of
 new CPU mode.  Palladium also includes some secure memory, a concept
 which does not exist in TCPA.

This is correct. Palladium has ring -1, and memory that is only 
accessible to ring -1 (or I/O initiated by ring -1).

Cheers,

Ben.

-- 
http://www.apache-ssl.org/ben.html   http://www.thebunker.net/

Available for contract work.

There is no limit to what a man can do or how far he can go if he
doesn't mind who gets the credit. - Robert Woodruff




Re: dangers of TCPA/palladium

2002-08-09 Thread Seth David Schoen

R. Hirschfeld writes:

  From: Peter N. Biddle [EMAIL PROTECTED]
  Date: Mon, 5 Aug 2002 16:35:46 -0700
 
  You can know this to be true because the
  TOR will be made available for review and thus you can read the source and
  decide for yourself if it behaves this way.
 
 This may be a silly question, but how do you know that the source code
 provided really describes the binary?
 
 It seems too much to hope for that if you compile the source code then
 the hash of the resulting binary will be the same, as the binary would
 seem to depend somewhat on the compiler and the hardware you compile
 on.

I heard a suggestion that Microsoft could develop (for this purpose)
a provably-correct minimal compiler which always produced identical
output for any given input.  If you believe the proof of correctness,
then you can trust the compiler; the compiler, in turn, should produce
precisely the same nub when you run it on Microsoft's source code as
it did when Microsoft ran it on Microsoft's source code (and you can
check the nub's hash, just as the SCP can).

I don't know for sure whether Microsoft is going to do this, or is
even capable of doing this.  It would be a cool idea.  It also isn't
sufficient to address all questions about deliberate malfeasance.  Back
in the Clipper days, one question about Clipper's security was "how do
we know the Clipper spec is secure?" (and the answer actually turned
out to be "it's not").  But a different question was "how do we know
that this tamper-resistant chip produced by Mykotronix even implements
the Clipper spec correctly?".

The corresponding questions in Palladium are "how do we know that the
Palladium specs (and Microsoft's nub implementation) are secure?" and
"how do we know that this tamper-resistant chip produced by a
Microsoft contractor even implements the Palladium specs correctly?".

In that sense, TCPA or Palladium can _reduce_ the size of the hardware
trust problem (you only have to trust a small number of components,
such as the SCP), and nearly eliminate the software trust problem, but
you still don't have an independent means of verifying that the logic
in the tamper-resistant chip performs according to its specifications.
(In fact, publishing the plans for the chip would hardly help there.)
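
To make the "small number of trusted components" point concrete, here is a
toy sketch of the checking a remote verifier would do.  Everything in it is
invented for illustration: a real SCP would vouch for the measurement with a
public-key signature and a proper challenge/response protocol, not the
shared-key HMAC stand-in used here.  The point is only that the verifier's
trust reduces to two things - the chip's key and a known-good hash of the
nub.

    import hashlib, hmac, os

    # Toy simulation only - not the Palladium protocol.
    SCP_KEY = os.urandom(32)  # stand-in for the SCP's key (really asymmetric)

    def scp_attest(nub_image, nonce):
        # What the chip would do: measure (hash) the running nub and vouch for it.
        measurement = hashlib.sha1(nub_image).hexdigest()
        msg = (measurement + nonce).encode()
        tag = hmac.new(SCP_KEY, msg, hashlib.sha1).hexdigest()
        return measurement, tag

    def verify(measurement, tag, nonce, known_good_hash):
        # What a remote verifier would do: check the chip's vouching, then the hash.
        msg = (measurement + nonce).encode()
        expected = hmac.new(SCP_KEY, msg, hashlib.sha1).hexdigest()
        chip_ok = hmac.compare_digest(tag, expected)  # trust #1: the chip's key
        nub_ok = (measurement == known_good_hash)     # trust #2: the published nub hash
        return chip_ok and nub_ok

    nub_image = b"...the nub binary that was reviewed..."  # placeholder bytes
    known_good = hashlib.sha1(nub_image).hexdigest()       # the hash you expect
    nonce = os.urandom(8).hex()
    measurement, tag = scp_attest(nub_image, nonce)
    print("attestation checks out:", verify(measurement, tag, nonce, known_good))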

This is a sobering thought, and it's consistent with ordinary security
practice, where security engineers try to _reduce_ the number of
trusted system components.  They do not assume that they can eliminate
trusted components entirely.  In fact, any demonstration of the
effectiveness of a security system must make some assumptions,
explicit or implicit.  As in other reasoning, when the assumptions are
undermined, the demonstration may go astray.

The chip fabricator can still -- for example -- find a covert channel
within a protocol supported by the chip, and use that covert channel
to leak your keys, or to leak your serial number, or to accept secret,
undocumented commands.

This problem is actually not any _worse_ in Palladium than it is in
existing hardware.  I am typing this in an ssh window on a Mac laptop.
I can read the MacSSH source code (my client) and the OpenSSH source
code (the server listening at the other end), and I can read specs for
most of the software and most of the parts which make up this laptop,
but I can't independently verify that they actually implement the
specs, the whole specs, and nothing but the specs.

As Ken Thompson pointed out in Reflections on Trusting Trust, the
opportunities for introducing backdoors in hardware or software run
deep, and can conceivably survive multiple generations, as though they
were viruses capable of causing Lamarckian mutations which cause the
cells of future generations to produce fresh virus copies.  Even if I
have a Motorola databook for the CPU in this iBook, I won't know
whether the microcode inside that CPU is compliant with the spec, or
whether it might contain back doors which can be used against me
somehow.  It's technically conceivable that the CPU microcode on this
machine understands MacOS, ssh, vt100, and vi, and is programmed to
detect "BWA HA HA!" arguments about trusted computing and invisibly
insert errors into them.  I would never know.

This problem exists with or without Palladium.  Palladium would
provide a new place where a particular vendor could put
security-critical (trusted) logic without direct end-user
accountability.  But there are already several such places in the
PC.  I don't think that this trust-bootstrapping problem can ever be
overcome, although maybe it's possible to chip away at it.  There is
a much larger conversation about trusted computing in general, which
we ought to be having:

What would make you want to enter sensitive information into a
complicated device, built by people you don't know, which you can't
take apart under a microscope?

That device doesn't have to be a computer.

-- 
Seth David Schoen [EMAIL PROTECTED] | Reading is a right, not a feature!
 

Re: dangers of TCPA/palladium

2002-08-08 Thread R. Hirschfeld

 From: Peter N. Biddle [EMAIL PROTECTED]
 Date: Mon, 5 Aug 2002 16:35:46 -0700

 You can know this to be true because the
 TOR will be made available for review and thus you can read the source and
 decide for yourself if it behaves this way.

This may be a silly question, but how do you know that the source code
provided really describes the binary?

It seems too much to hope for that if you compile the source code then
the hash of the resulting binary will be the same, as the binary would
seem to depend somewhat on the compiler and the hardware you compile
on.  But this means that you also can't just use the TOR you compiled,
as you then won't be able to unseal any data sealed with the standard
TOR.  Or do I misunderstand how this all works (very likely the case)?
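
For concreteness, the behavior I have in mind is something like the toy
sketch below.  The key derivation, the names, and the use of SHA-1/HMAC are
pure invention on my part (not anything from the Palladium design), but they
show why a TOR whose hash differs by even one bit cannot derive the key that
the standard TOR sealed data under.

    import hashlib, hmac

    # Stand-in for a secret that never leaves the security chip.
    PLATFORM_SECRET = b"root secret held inside the chip"

    def sealing_key(tor_image):
        # Derive a key bound to the identity (hash) of the currently running TOR.
        measurement = hashlib.sha1(tor_image).digest()
        return hmac.new(PLATFORM_SECRET, measurement, hashlib.sha256).digest()

    standard_tor = b"...binary of the standard, widely reviewed TOR..."
    rebuilt_tor = b"...my own compile, differing by even a single bit..."

    key_used_to_seal = sealing_key(standard_tor)   # data was sealed under this key
    key_when_unsealing = sealing_key(rebuilt_tor)  # what my rebuilt TOR can derive

    print("same key recovered:", key_used_to_seal == key_when_unsealing)  # False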

