Re: Designing and implementing malicious hardware

2008-04-26 Thread Leichter, Jerry
On Thu, 24 Apr 2008, Jacob Appelbaum wrote:
| Perry E. Metzger wrote:
|  A pretty scary paper from the Usenix LEET conference:
|  
|  http://www.usenix.org/event/leet08/tech/full_papers/king/king_html/
|  
|  The paper describes how, by adding a very small number of gates to a
|  microprocessor design (small enough that it would be hard to notice
|  them), you can create a machine that is almost impossible to defend
|  against an attacker who possesses a bit of secret knowledge. I
|  suggest reading it -- I won't do it justice with a small summary.
|  
|  It is about the most frightening thing I've seen in years -- I have
|  no idea how one might defend against it.
| 
| Silicon has no secrets.
| 
| I spent last weekend in Seattle and Bunnie (of XBox hacking
| fame/Chumby) gave a workshop with Karsten Nohl (who recently cracked
| MiFare).
| 
| In a matter of an hour, all of the students were able to take a
| selection of a chip (from an OK photograph) and walk through the
| transistor layout to describe the gate configuration. I was surprised
| (not being an EE person by training) at how easy it can be to
| understand production hardware. Debug pads, automated masking,
| etc. Karsten has written a set of MatLab extensions that he used to
| automatically describe the circuits of the mifare devices. Automation
| is key though, I think doing it by hand is the path of madness.
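
For concreteness, here is a toy software model of the kind of trigger the
paper describes - not the authors' actual circuits; the magic constant and
the names below are invented for illustration.  A handful of extra gates
compare a bus value against a secret constant and, on a match, latch a bit
that quietly bypasses a privilege check:

    /* Toy model of a hardware backdoor trigger (hypothetical; the value
     * and the names are invented, not taken from the paper). */
    #include <stdint.h>
    #include <stdio.h>

    #define MAGIC_TRIGGER 0x4D414C4943494F55ULL  /* the attacker's secret */

    static int backdoor_armed = 0;       /* the extra latched state bit */

    /* Conceptually runs on every word crossing the watched data bus. */
    static void watch_bus(uint64_t word)
    {
        if (word == MAGIC_TRIGGER)
            backdoor_armed = 1;
    }

    /* An ordinary privilege check, silently subverted by the latch. */
    static int access_allowed(int requester_is_privileged)
    {
        return requester_is_privileged || backdoor_armed;
    }

    int main(void)
    {
        printf("before trigger: %d\n", access_allowed(0));   /* denied  */
        watch_bus(MAGIC_TRIGGER);     /* attacker supplies the magic value */
        printf("after trigger:  %d\n", access_allowed(0));   /* granted */
        return 0;
    }
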
While analysis of the actual silicon will clearly have to be part of
any solution, it's going to be much harder than that:

1.  Critical circuitry will likely be tamper-resistant.
Tamper-resistance techniques make it hard to see what's
there, too.  So, paradoxically, the very mechanisms used
to protect circuitry against one attack make it more
vulnerable to another.  What this highlights, perhaps,
is the need for transparent tamper-resistance techniques,
which prevent tampering but don't interfere with inspection.

2.  An experienced designer can readily understand circuitry
that was designed normally.  This is analogous to the
ability of an experienced C programmer to understand what a
normal, decently-designed C program is doing.  Understanding
what a poorly designed C program is doing is a
whole other story - just look at the history of the
Obfuscated C contests.  At least in that case, an
experienced analyst can raise the alarm that something
weird is going on.  But what about *deliberately deceptive*
C code?  Look up the Underhanded C Contest on Wikipedia.
The 2007 contest was to write a program that implements
a standard, reliable encryption algorithm, but which some
percentage of the time makes the data easy to decrypt
(if you know how) - and which will look innocent to
an analyst.  There have been two earlier contests.
I remember seeing another, similar contest in which
the goal was to produce a vote-counting program that
looked completely correct, but biased the results.
The winner was amazingly good - I consider myself
pretty good at analyzing code, but even knowing that
this code had a hook in it, I missed it completely.
Worse, none of the code even set off my "why is it
doing *that*?" detector.  (A toy example of the kind of
deception I mean appears at the end of this message.)

3.  This is another step in a long line of attacks that
work by moving to a lower level of abstraction
and using that to invalidate the assumptions that
implementations at higher levels of abstraction rely on.
There's a level below logic gates: the actual circuitry.
A paper dating back to 1999 - Analysis of Unconventional
Evolved Electronics, CACM V42#4 (it doesn't seem to be
available on-line) - reported on experiments using genetic
algorithms to evolve an FPGA design to solve a simple
problem (something like: generate a -0.5V output if you
see a 200Hz input, and a +1V output if you see a 2kHz
input).  The genetic algorithm ran at the design level,
but fitness testing was done on actual, synthesized
circuits.

A human engineer given this problem would have used a
counter chain of some sort.  The evolved circuit had
nothing that looked remotely like a counter chain.  But
it worked ... and the experimenters couldn't figure out
exactly how.  Probing the FPGA generally caused it to
stop working.  The design included unconnected gates -
which, if removed, caused the circuit to stop working.
Presumably, the circuit was relying on the analogue
characteristics of the FPGA rather than its nominal
digital behavior.
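
Here is the kind of thing I mean by deliberately deceptive code - a toy
example of my own, far cruder than actual Underhanded C Contest entries
(the function name and the 16-byte key are just for illustration).  The
routine reads like an ordinary XOR stream cipher keyed by 16 random bytes,
but the "key mixing" loop collapses the 128-bit key into a seed of at most
16*255 = 4080, so anyone who knows the trick can brute-force every message:

    /* Crude illustration of the genre, not a contest entry.  The subtle
     * flaw: summing the key bytes loses almost all of the key's entropy. */
    #include <stddef.h>
    #include <stdlib.h>

    void xor_encrypt(unsigned char *buf, size_t len,
                     const unsigned char key[16])
    {
        unsigned seed = 0;
        for (int i = 0; i < 16; i++)
            seed += key[i];         /* looks like mixing; seed <= 4080 */

        srand(seed);                /* rand() makes a poor keystream anyway */
        for (size_t i = 0; i < len; i++)
            buf[i] ^= (unsigned char)(rand() & 0xff);
    }

Decryption is the same call with the same key, so the code even works
correctly - it's just that the effective key space is tiny.
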

RE: Designing and implementing malicious hardware

2008-04-26 Thread Crawford Nathan-HMGT87
I suppose Ken Thompson's "Reflections on Trusting Trust" is appropriate
here.  This kind of vulnerability has been known about for quite some
time, but did not have much relevance until the advent of ubiquitous
networking.



Re: Designing and implementing malicious hardware

2008-04-26 Thread Karsten Nohl


Jacob Appelbaum wrote:

Perry E. Metzger wrote:

A pretty scary paper from the Usenix LEET conference:

http://www.usenix.org/event/leet08/tech/full_papers/king/king_html/

The paper describes how, by adding a very small number of gates to a
microprocessor design (small enough that it would be hard to notice
them), you can create a machine that is almost impossible to defend
against an attacker who possesses a bit of secret knowledge. I suggest
reading it -- I won't do it justice with a small summary.

It is about the most frightening thing I've seen in years -- I have no
idea how one might defend against it.



Silicon has no secrets.

I spent last weekend in Seattle and Bunnie (of XBox hacking fame/Chumby)
gave a workshop with Karsten Nohl (who recently cracked MiFare).

In a matter of an hour, all of the students were able to take a
selection of a chip (from an OK photograph) and walk through the
transistor layout to describe the gate configuration. I was surprised
(not being an EE person by training) at how easy it can be to understand
production hardware. Debug pads, automated masking, etc. Karsten has
written a set of MatLab extensions that he used to automatically
describe the circuits of the mifare devices. Automation is key though, I
think doing it by hand is the path of madness.

If we could convince (this is the hard part) companies to publish what
they think their chips should look like, we'd have a starting point.

Perhaps,
Jacob


Silicon has no secrets, indeed. But it's also much too complex for 
exhaustive functionality tests; in particular if the tests are open-ended, 
as they need to be when hunting for backdoors.
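
A back-of-the-envelope sketch of why (my numbers, not a measurement): even 
a single hidden 64-bit comparator would require sweeping 2^64 input values, 
which at an optimistic 10^9 test vectors per second is still centuries of 
testing:

    /* Rough arithmetic only; the test rate below is an assumption. */
    #include <stdio.h>

    int main(void)
    {
        double vectors = 18446744073709551616.0;  /* 2^64 trigger candidates */
        double per_sec = 1e9;                     /* optimistic test rate */
        double years   = (vectors / per_sec) / (365.25 * 24 * 3600);

        printf("about %.0f years to sweep one 64-bit trigger\n", years);
        return 0;
    }
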


While a single chip designer will perhaps not have the authority needed 
to significantly alter functionality, a small team of designers could 
very well adapt their part of a design and introduce a backdoor.


Hardware designs are currently moving away from what in software would be 
open source. Chip obfuscation meant to protect IP, combined with the 
ever-increasing size of chips, makes it almost impossible to 
reverse-engineer an entire chip.


Bunnie pointed out that the secret debugging features of current 
processors perhaps already include functionality that breaks process 
separation. The fact that these features stay secret suggests that it is 
in fact hard to detect any undocumented functionality.


Assuming that hardware backdoors can be built, the interesting question 
becomes how to defend against them. Even after a particular triggering 
string is identified, it is not clear whether software can be used to 
detect malicious programs. It almost appears as if the processor would 
need a hardware-based virus scanner of sorts. This scanner could be 
simple, as it only has to match known signatures, but it would need to 
have access to a large number of internal data structures while being 
developed by a completely separate team of designers.
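
As a rough software model of what such a scanner would do (purely 
hypothetical; the signature values and names are invented), it would watch 
internal traffic and raise an alarm whenever a known trigger pattern 
appears:

    /* Hypothetical sketch of a signature-matching trigger scanner. */
    #include <stddef.h>
    #include <stdint.h>
    #include <stdio.h>

    /* Known trigger values, e.g. published once a backdoor is found. */
    static const uint64_t known_triggers[] = {
        0x4D414C4943494F55ULL,          /* invented example value */
        0x123456789ABCDEF0ULL,          /* invented example value */
    };
    static const size_t n_triggers =
        sizeof known_triggers / sizeof known_triggers[0];

    /* Conceptually invoked on every word crossing a watched bus. */
    static int scan_bus_word(uint64_t word)
    {
        for (size_t i = 0; i < n_triggers; i++)
            if (word == known_triggers[i])
                return 1;               /* raise an alarm / trap */
        return 0;
    }

    int main(void)
    {
        uint64_t traffic[] = { 42, 0x4D414C4943494F55ULL, 7 };
        for (int i = 0; i < 3; i++)
            if (scan_bus_word(traffic[i]))
                printf("known trigger signature in word %d\n", i);
        return 0;
    }

The catch, as noted above, is that such a block only catches triggers that 
are already known, and it would itself have to be designed and verified by 
an independent team.
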


-Karsten



Re: Designing and implementing malicious hardware

2008-04-26 Thread Anne Lynn Wheeler

Leichter, Jerry wrote:

While analysis of the actual silicon will clearly have to be part of
any solution, it's going to be much harder than that:

1.  Critical circuitry will likely be tamper-resistant.
Tamper-resistance techniques make it hard to see what's
there, too.  So, paradoxically, the very mechanisms used
to protect circuitry against one attack make it more
vulnerable to another.  What this highlights, perhaps,
is the need for transparent tamper-resistance techniques,
which prevent tampering but don't interfere with inspection.


traditional approach is to make the compromise more expensive than any
reasonable expectation of benefit (security proportional to risk).

helping bracket expected fraud ROI is an infrastructure that can (quickly)
invalidate (identified) compromised units. there have been some issues
with these kinds of infrastructures since they have also been identified
with being able to support DRM (and other kinds of anti-piracy) efforts.
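
a minimal sketch of the invalidation check (names and list format invented
for illustration; in practice the list would have to be authenticated and
distributed quickly):

    /* hypothetical sketch: relying party refuses units whose serial
     * numbers appear on a revocation list. */
    #include <stddef.h>
    #include <stdint.h>
    #include <stdio.h>

    static const uint64_t revoked_serials[] = { 1001, 4242, 31337 };
    static const size_t n_revoked =
        sizeof revoked_serials / sizeof revoked_serials[0];

    static int unit_is_revoked(uint64_t serial)
    {
        for (size_t i = 0; i < n_revoked; i++)
            if (revoked_serials[i] == serial)
                return 1;
        return 0;
    }

    int main(void)
    {
        uint64_t serial = 4242;   /* invented example serial */
        printf("unit %llu: %s\n", (unsigned long long)serial,
               unit_is_revoked(serial) ? "revoked, do not trust"
                                       : "accepted");
        return 0;
    }
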

disclaimer: we actually have done some number of patents (that are
assigned) in this area
http://www.garlic.com/~lynn/aadssummary.htm



Re: Designing and implementing malicious hardware

2008-04-26 Thread Adam Fields
On Sat, Apr 26, 2008 at 02:33:11AM -0400, Karsten Nohl wrote:
[...]
 Assuming that hardware backdoors can be built, the interesting question 
 becomes how to defend against them. Even after a particular triggering 
 string is identified, it is not clear whether software can be used to 
 detect malicious programs. It almost appears as if the processor would 
 need a hardware-based virus scanner of sorts. This scanner could be 
 simple, as it only has to match known signatures, but it would need to 
 have access to a large number of internal data structures while being 
 developed by a completely separate team of designers.

Wouldn't it be fun to assume that these are already present in all
sorts of devices?

-- 
- Adam

** Expert Technical Project and Business Management
 System Performance Analysis and Architecture
** [ http://www.adamfields.com ]

[ http://www.morningside-analytics.com ] .. Latest Venture
[ http://www.confabb.com ]  Founder
[ http://www.aquick.org/blog ]  Blog
[ http://www.adamfields.com/resume.html ].. Experience
[ http://www.flickr.com/photos/fields ] ... Photos
[ http://www.aquicki.com/wiki ].Wiki

-
The Cryptography Mailing List
Unsubscribe by sending unsubscribe cryptography to [EMAIL PROTECTED]