Re: defending against evil in all layers of hardware and software

2008-04-29 Thread Jonathan Thornburg
On Tue, 29 Apr 2008, Ivan Krstić wrote:
> On Apr 28, 2008, at 12:58 PM, John Denker wrote:
> > Of course we should insist on open-source boot ROM code:
> > The boot ROM should check the pgp signature of each PCI card's
> > BIOS code before letting it get control.  And then it should
> > check the pgp signature of the operating system before booting
> > it.  I don't know of any machine that actually does this
> 
> 
> The OLPC XO-1 laptop has an open-source bootloader (Open Firmware) which
> checks the operating system signature before passing control to it.

If the bootloader is running on malicious hardware I don't think that
test can be trusted. :(

-- Jonathan Thornburg (remove -animal to reply) <[EMAIL PROTECTED]>
   School of Mathematics, U of Southampton, England
   "C++ is to programming as sex is to reproduction. Better ways might
technically exist but they're not nearly as much fun." -- Nikolai Irgens



Re: defending against evil in all layers of hardware and software

2008-04-29 Thread Perry E. Metzger

Stephan Neuhaus <[EMAIL PROTECTED]> writes:
> On Apr 28, 2008, at 23:56, Perry E. Metzger wrote:
>
>> If you have a rotten apple engineer, he will be able to hide what he's
>> trying to do and make it look completely legit. If he's really good,
>> it may not be possible to catch what he's done EVEN IN PRINCIPLE.
>
> Fred Cohen proved in 1984 in his "Computer Viruses, Theory and
> Experiments"[1] that "Program P is a virus" is undecidable.

He needn't have bothered. All non-trivial semantic properties of
programs are undecidable. Rice's Theorem, you know. Such a proof is one
line -- you need merely assert that "X is a virus" is a non-trivial
semantic property (that is, one that holds for some programs but not
for others).
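
Spelled out, the standard argument is a reduction from the halting
problem; a minimal sketch in Python (every name here is a hypothetical
illustration, not anything from Cohen's paper or a real tool):

   # Suppose, for contradiction, we had a perfect classifier:
   def is_virus(program_source: str) -> bool:
       raise NotImplementedError  # cannot exist, per Rice's theorem

   def halts(p_source: str, x: str) -> bool:
       # Build a program that first simulates P on input x and only then
       # does something unambiguously viral.  It "is a virus" exactly
       # when P halts on x, so is_virus() would decide the halting
       # problem -- a contradiction.
       wrapper = (
           "def main():\n"
           f"    simulate({p_source!r}, {x!r})\n"
           "    replicate_self()\n"
       )
       return is_virus(wrapper)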

Perry

-
The Cryptography Mailing List
Unsubscribe by sending "unsubscribe cryptography" to [EMAIL PROTECTED]


Re: defending against evil in all layers of hardware and software

2008-04-29 Thread Ivan Krstić

On Apr 28, 2008, at 12:58 PM, John Denker wrote:

Of course we should insist on open-source boot ROM code:
The boot ROM should check the pgp signature of each PCI card's
BIOS code before letting it get control.  And then it should
check the pgp signature of the operating system before booting
it.  I don't know of any machine that actually does this



The OLPC XO-1 laptop has an open-source bootloader (Open Firmware)  
which checks the operating system signature before passing control to  
it.


--
Ivan Krstić <[EMAIL PROTECTED]> | http://radian.org
-
The Cryptography Mailing List
Unsubscribe by sending "unsubscribe cryptography" to [EMAIL PROTECTED]


Re: defending against evil in all layers of hardware and software

2008-04-29 Thread Ivan Krstić

On Apr 28, 2008, at 2:56 PM, Perry E. Metzger wrote:

I'm pretty sure we can defend against this sort of thing a lot of the
time (by no means all) if it is done by quite ordinary criminals. If
it is done by really good people, I have very serious doubts.



I think you just described all of security.

--
Ivan Krstić <[EMAIL PROTECTED]> | http://radian.org

-
The Cryptography Mailing List
Unsubscribe by sending "unsubscribe cryptography" to [EMAIL PROTECTED]


Re: defending against evil in all layers of hardware and software

2008-04-29 Thread Stephan Neuhaus


On Apr 28, 2008, at 23:56, Perry E. Metzger wrote:


If you have a rotten apple engineer, he will be able to hide what he's
trying to do and make it look completely legit. If he's really good,
it may not be possible to catch what he's done EVEN IN PRINCIPLE.


Fred Cohen proved in 1984 in his "Computer Viruses: Theory and
Experiments"[1] that "Program P is a virus" is undecidable.  I assume
that this result carries over to hardware, in the form that "Chip C
contains malicious gates" is also undecidable.  (Caveat: Cohen seems to
rest on the assumption that there is no fundamental distinction between
code and data, something that need not hold everywhere inside a
computer chip.)


Fun,

Stephan

[1] See for example http://vx.netlux.org/lib/afc01.html

-
The Cryptography Mailing List
Unsubscribe by sending "unsubscribe cryptography" to [EMAIL PROTECTED]


Re: Just update the microcode (was: Re: defending against evil in all layers of hardware and software)

2008-04-29 Thread Sebastian Krahmer

The "signature" in the microcode update has not the same
meaning as within crypto. For intel chips it has 31bits and basically
contains a revision number. The requirements for the BIOS for
checking microcode updates are in short: check the crc and ensure
that older revisions cant replace new ones by comparing the "signature".
I did not try myself, but I think one can probably update anything
if you just hexedit the update header.
Afaik these chips do not own any crypto-related functionallity
or storage capability (except precise timing and rand maybe) and
they are not tamper-proof. Thats why TPM was invented :-)
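
To make that concrete, here is a minimal sketch in Python of the kind
of acceptance check described above, assuming the publicly documented
Intel update-header layout (the field names, offsets, and helper below
are an illustration, not a description of any particular BIOS):

   import struct

   def bios_would_accept(update: bytes, current_revision: int) -> bool:
       # First five 32-bit header fields: header version, update
       # revision, date, processor signature (CPUID value), checksum.
       _hdr_ver, revision, _date, _cpuid, _csum = struct.unpack_from("<5I", update)
       # Integrity check: all 32-bit words of the update sum to zero
       # modulo 2**32.  This is a checksum, not a cryptographic signature.
       n = len(update) // 4
       if sum(struct.unpack_from("<%dI" % n, update)) % (1 << 32) != 0:
           return False
       # Anti-rollback: only strictly newer revisions get applied.
       return revision > current_revision

   # Nothing above involves a secret key: edit the header, fix up the
   # checksum, and an update still passes this kind of check.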

l8er,
Sebastian

On Mon, Apr 28, 2008 at 06:16:12PM -0400, John Ioannidis wrote:

> Intel and AMD processors can have new microcode loaded to them, and this 
> is usually done by the BIOS.  Presumably there is some asymmetric crypto 
> involved with the processor doing the signature validation.
> 
> A major power that makes a good fraction of the world's laptops and 
> desktops (and hence controls the circuitry and the BIOS, even if they do 
> not control the chip manufacturing process) would be in a good place to 
> introduce problems that way, no?
> 
> /ji
> 
> -
> The Cryptography Mailing List
> Unsubscribe by sending "unsubscribe cryptography" to [EMAIL PROTECTED]

-- 
~~
~~ perl self.pl
~~ $_='print"\$_=\47$_\47;eval"';eval
~~ [EMAIL PROTECTED] - SuSE Security Team
~~ SUSE LINUX Products GmbH, GF: Markus Rex, HRB 16746 (AG Nuernberg)

-
The Cryptography Mailing List
Unsubscribe by sending "unsubscribe cryptography" to [EMAIL PROTECTED]


Re: Just update the microcode (was: Re: defending against evil in all layers of hardware and software)

2008-04-29 Thread John Ioannidis

[EMAIL PROTECTED] wrote:
> No need to be a major power.  Linux patches x86 microcode, as does Windows.  I ran across a project several years ago that modified the microcode for some x86 I/O instructions.  Here's a good link explaining it all.



What the OS or the BIOS loads are files that come from Intel.

There is some verification involved, as the processor won't just accept
random bytes.  You'll need a fair amount of money, as well as
intelligence expertise, to get hold of the signing keys, not to mention
the documentation for how to write microcode in the first place.  I
assume that's one of Intel's (and AMD's) most closely guarded secrets.




> http://en.wikipedia.org/wiki/Microcode


"It must be true, I read it on the Internet" :)



> All this hw/sw flexibility makes designing a good security system a real
> challenge.  You need a reference monitor somewhere in it that you can truly
> trust.
>
> - Alex



That we agree on!

/ji




- Original Message -
From: "John Ioannidis" <[EMAIL PROTECTED]>
To: Cryptography 
Subject: Just update the microcode (was: Re: defending against 
evil in all layers of hardware and software)

Date: Mon, 28 Apr 2008 18:16:12 -0400


Intel and AMD processors can have new microcode loaded to them, and 
this is usually done by the BIOS.  Presumably there is some 
asymmetric crypto involved with the processor doing the signature 
validation.


A major power that makes a good fraction of the world's laptops and 
desktops (and hence controls the circuitry and the BIOS, even if 
they do not control the chip manufacturing process) would be in a 
good place to introduce problems that way, no?


/ji

-
The Cryptography Mailing List
Unsubscribe by sending "unsubscribe cryptography" to [EMAIL PROTECTED]




-
The Cryptography Mailing List
Unsubscribe by sending "unsubscribe cryptography" to [EMAIL PROTECTED]


Re: Just update the microcode (was: Re: defending against evil in all layers of hardware and software)

2008-04-29 Thread alex

No need to be a major power.  Linux patches x86 microcode, as does Windows.  I ran
across a project several years ago that modified the microcode for some x86 I/O
instructions.  Here's a good link explaining it all.

http://en.wikipedia.org/wiki/Microcode

All this hw/sw flexibility makes designing a good security system a real 
challenge.  You need a reference monitor somewhere in it that you can truly 
trust.

- Alex


> - Original Message -
> From: "John Ioannidis" <[EMAIL PROTECTED]>
> To: Cryptography 
> Subject: Just update the microcode (was: Re: defending against 
> evil in all layers of hardware and software)
> Date: Mon, 28 Apr 2008 18:16:12 -0400
> 
> 
> Intel and AMD processors can have new microcode loaded to them, and 
> this is usually done by the BIOS.  Presumably there is some 
> asymmetric crypto involved with the processor doing the signature 
> validation.
> 
> A major power that makes a good fraction of the world's laptops and 
> desktops (and hence controls the circuitry and the BIOS, even if 
> they do not control the chip manufacturing process) would be in a 
> good place to introduce problems that way, no?
> 
> /ji
> 
> -
> The Cryptography Mailing List
> Unsubscribe by sending "unsubscribe cryptography" to [EMAIL PROTECTED]

> 

-
The Cryptography Mailing List
Unsubscribe by sending "unsubscribe cryptography" to [EMAIL PROTECTED]


Just update the microcode (was: Re: defending against evil in all layers of hardware and software)

2008-04-28 Thread John Ioannidis
Intel and AMD processors can have new microcode loaded to them, and this 
is usually done by the BIOS.  Presumably there is some asymmetric crypto 
involved with the processor doing the signature validation.


A major power that makes a good fraction of the world's laptops and 
desktops (and hence controls the circuitry and the BIOS, even if they do 
not control the chip manufacturing process) would be in a good place to 
introduce problems that way, no?


/ji

-
The Cryptography Mailing List
Unsubscribe by sending "unsubscribe cryptography" to [EMAIL PROTECTED]


Re: defending against evil in all layers of hardware and software

2008-04-28 Thread Perry E. Metzger

John Denker <[EMAIL PROTECTED]> writes:
> This is an important discussion  
>
> The threats are real, and we need to defend against them.

I'm not sure how to feasibly defend against such things. It would seem
to require complete control over the entire design and supply chain,
which involves so many thousands of people who could be bribed that I
have serious doubts that it can be done perfectly.

> This should not be an occasion for idly wringing our hands, nor 
> sticking our head in the sand, nor looking under the lamp-post 
> where the looking is easy.  We need to look at all of this stuff.  
> And we can.  We are not defenseless.

I'll believe that when I see feasible defenses. So far as I can tell,
if you can't trust the hardware supplier, you're meat. I don't think
it is possible even in principle to validate the hardware after the
fact. Even if you could apply formal methods successfully, it isn't
even obvious how you would specify the property of the system that
you're trying to prove. "Never does anything bad" is kind of nebulous
if you're doing proofs.

> As in all security, we need not aim for absolute security.  An 
> often-useful approach is to do things that raise the cost to 
> the attacker out of proportion to the cost to the defender.

Well, this sort of thing is already not that interesting to an
ordinary spammer or phisher -- they have no trouble making loads of
money without engaging in such stuff.

If you're talking about what a national government might pay to get
such a back door in hardware, though, I think that it is probably
worth billions to such an entity. After all, a decent bomber these
days costs a billion dollars, and clearly this is a lot more potent.

Given that, I don't see what would work in practice. If a major power
wanted to turn a couple of engineers at the right place in the design
or supply chain, the amount of money needed is far below the amount in
play.

> For software, for firmware, and to some extent even for silicon
> masks, SCM (source code management) systems, if used wisely, can
> help a lot.

If you have a rotten apple engineer, he will be able to hide what he's
trying to do and make it look completely legit. If he's really good,
it may not be possible to catch what he's done EVEN IN PRINCIPLE. All
an SCM can do is tell you who put the bad stuff in much after the fact
if you ever catch it at all. That's not exactly "defense". It is at
best "post mortem".

> Of course we should insist on an open-source boot ROM code:
>   http://www.coreboot.org/Welcome_to_coreboot

Won't help. A bad apple can probably manage a sufficiently subtle flaw
that it won't be noticed by widespread code inspection. See Jerry's
earlier posting on the subject.

> Another line of defense involves closing the loop.  For example,
> one could in principle find Ken's trojan by disassembling the
> compiler and looking for code that doesn't seem to "belong".

Only if the bad guy doesn't anticipate that you might do that.

I'm pretty sure we can defend against this sort of thing a lot of the
time (by no means all) if it is done by quite ordinary criminals. If
it is done by really good people, I have very serious doubts.


-- 
Perry E. Metzger[EMAIL PROTECTED]

-
The Cryptography Mailing List
Unsubscribe by sending "unsubscribe cryptography" to [EMAIL PROTECTED]


defending against evil in all layers of hardware and software

2008-04-28 Thread John Denker
This is an important discussion  

The threats are real, and we need to defend against them.

We need to consider the _whole_ problem, top to bottom.  The
layers that could be subverted include, at a minimum:
 -- The cpu chip itself (which set off the current flurry of
  interest).
 -- The boot rom.
 -- The BIOS code that lives on practically every card plugged
  into the PCI bus.
 -- Board-level stuff like memory controllers and I/O bridges.
 -- The operating system.
 -- Compilers, as Uncle Ken pointed out.
  http://www.ece.cmu.edu/~ganger/712.fall02/papers/p761-thompson.pdf
 -- Your "secure" application.
 -- Users.

As a particular example where PCs that we might wish to be secure 
are known to be under attack, consider electronic voting machines.  
In most cases there's a PC in there, running Windows CE.  Some
application software was provided by people with felony fraud
convictions.  Means, motive, and opportunity are all present.
There is ample evidence that "problems" have occurred.  These
are not confined to the Florida fiasco in 2000.  An example from 
2004 is the voting machine in Franklin County, Ohio that recorded 
4,258 votes for Bush when only 638 voters showed up.
  http://www.truthout.org/Conyersreport.pdf

This should not be an occasion for idly wringing our hands, nor 
sticking our head in the sand, nor looking under the lamp-post 
where the looking is easy.  We need to look at all of this stuff.  
And we can.  We are not defenseless.

As in all security, we need not aim for absolute security.  An 
often-useful approach is to do things that raise the cost to 
the attacker out of proportion to the cost to the defender.

For software, for firmware, and to some extent even for silicon
masks, SCM (source code management) systems, if used wisely, can
help a lot.  Consider for example a policy requiring every delta to
the software to be submitted by one person and tested by another
before being committed to the main branch of the project.  Both
the submitter and the tester would digitally sign the delta.  
This creates a long-tailed liability for anybody who tries to
sneak in a trojan.  This is AFAIK the simplest defense against
high-grade attacks such as Ken's, which leave no long-term trace
in the source code (because the trojan is self-replicating).  
The point is that there is a long-term trace in the SCM logs.
We can make the logs effectively permanent and immutable.
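
A minimal sketch of what such a log could look like, in Python (sign()
below is a stand-in for a real digital-signature scheme, and none of
the names describe any particular SCM):

  import hashlib, json

  def sign(key: str, data: bytes) -> str:
      # Placeholder only -- a real system would use proper signatures.
      return hashlib.sha256(key.encode() + data).hexdigest()

  def append_delta(log, delta_text, submitter_key, tester_key):
      prev_hash = log[-1]["entry_hash"] if log else "0" * 64
      body = {"prev": prev_hash, "delta": delta_text}
      blob = json.dumps(body, sort_keys=True).encode()
      entry = dict(body,
                   submitter_sig=sign(submitter_key, blob),
                   tester_sig=sign(tester_key, blob))
      entry["entry_hash"] = hashlib.sha256(
          json.dumps(entry, sort_keys=True).encode()).hexdigest()
      log.append(entry)
      return entry

  # Rewriting any old delta changes its entry_hash and breaks every
  # later "prev" link, so a trojan sneaked in long ago leaves a trace.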

Of course we should insist on open-source boot ROM code:
  http://www.coreboot.org/Welcome_to_coreboot
The boot ROM should check the pgp signature of each PCI card's
BIOS code before letting it get control.  And then it should
check the pgp signature of the operating system before booting 
it.  I don't know of any machine that actually does this, but
it all seems perfectly doable.
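
In outline, that boot path might look something like this (a sketch
only; every helper name is hypothetical, and a real boot ROM would
carry its own verification code and built-in public keys):

  def secure_boot(rom):
      # Verify each PCI card's BIOS (option ROM) before it gets control.
      for card in rom.enumerate_pci_cards():
          blob, sig = card.option_rom(), card.option_rom_signature()
          if not rom.pgp_verify(blob, sig, trusted_key=rom.vendor_pubkey):
              continue  # refuse to execute unsigned or tampered card code
          rom.execute(blob)
      # Then verify the operating system image before booting it.
      kernel, ksig = rom.load_kernel(), rom.load_kernel_signature()
      if not rom.pgp_verify(kernel, ksig, trusted_key=rom.os_pubkey):
          rom.halt("operating system signature check failed")
      rom.boot(kernel)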

Another line of defense involves closing the loop.  For example,
one could in principle find Ken's trojan by disassembling the
compiler and looking for code that doesn't seem to "belong".
I have personally disassembled a couple of operating systems
(although this was back when operating systems were smaller
than they are now).

We can similarly close the loop on chips.  As others have pointed
out, silicon has no secrets.  A cost-effective way to check for 
trojans would be to buy more voting machines than necessary, and 
choose /at random/ a few of them to be torn down for testing.
(This has obvious analogies to sampling methods used in many
crypto algorithms.)  For starters, we grind open the CPU chips
and check that they are all the same.  That's much easier than
checking the detailed functionality of each one.  And we check
that the CPUs in the voting machines are the same as CPUs from 
another source, perhaps WAL*MART, on the theory that the attacker
finds it harder to subvert all of WAL*MART than to subvert just
a truckload of voting machines.
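
The random-selection step is easy to state precisely; a small sketch in
Python (the sample size is an assumption, picked only for illustration):

  import secrets

  def choose_teardown_sample(machine_ids, k=5):
      # Draw k machines without replacement using a cryptographically
      # strong RNG, so the attacker cannot predict which machines will
      # be torn down and audited.
      pool = list(machine_ids)
      return [pool.pop(secrets.randbelow(len(pool)))
              for _ in range(min(k, len(pool)))]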

Checksumming the boot ROM in the torn-down machine is easy.  I
emphasize that we should *not* rely on asking a running machine
to checksum its own ROMs, because it is just too easy to subvert
the program that calculates the checksum.  To defend against
this, we tear down multiple machines, and give one randomly
selected ROM to the Democratic pollwatchers, one to the Republican 
pollwatchers, et cetera.  This way nobody needs to trust anybody
else;  each guy is responsible for making sure _his_ checksummer
is OK.
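
Concretely, each party's check can be as simple as hashing the ROM
image dumped from the torn-down machine and comparing it against a
reference image obtained through an independent channel; a sketch:

  import hashlib

  def rom_matches_reference(rom_image: bytes, reference_image: bytes) -> bool:
      # Both images are dumped offline from physical hardware, never
      # reported by the running machine itself.
      return hashlib.sha256(rom_image).digest() == \
             hashlib.sha256(reference_image).digest()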

All of this works hand-in-glove with old-fashioned procedural
security and physical security.  As the saying goes, if you
don't have physical security, you don't have security.  But
the converse is true, too:  Placing armed guards around a vault 
full of voting machines doesn't make the machines any less buggy 
than they were when they went into the vault. That's why we need 
a balanced approach that gets all the layers to work together.

-
The Cryptography Mailing List
Unsubscribe by sending "unsubscribe cryptography" to [EMAIL PROTECTED]