Re: Designing and implementing malicious hardware

2008-04-29 Thread COMINT
There are high-assurance systems that do exactly this. There
are two different implementations of the security unit processing the
same data. The outputs are compared by a separate, validated,
high-assurance module that enters an alarm mode should the outputs
differ.

However, these are generally costly affairs: you need to pay two
implementation teams, among other expenses. They therefore remain the
luxury of only the most critical systems.


Leichter, Jerry wrote:
| For hardware, this would mean running multiple chips in parallel
| checking each other's states/outputs.  Architectures like that have
| been built for reliability (e.g., Stratus), but generally they assume
| identical processors.  Whether you can actually build such a thing
| with deliberately different processors is an open question.

-
The Cryptography Mailing List
Unsubscribe by sending unsubscribe cryptography to [EMAIL PROTECTED]


Re: Designing and implementing malicious hardware

2008-04-28 Thread Leichter, Jerry
On Sat, 26 Apr 2008, Karsten Nohl wrote:
| Assuming that hardware backdoors can be built, the interesting
| question becomes how to defend against them. Even after a particular
| triggering string is identified, it is not clear whether software can
| be used to detect malicious programs. It almost appears as if the
| processor would need a hardware-based virus-scanner of sorts. This
| scanner could be simple, as it only has to match known signatures, but
| would need to have access to a large number of internal data
| structures while being developed by a completely separate team of
| designers.
I suspect the only heavy-weight defense is the same one we use against
the Trusting Trust hook-in-the-compiler attack:  Cross-compile on
as many compilers from as many sources as you can, on the assumption
that not all compilers contain the same hook.  For hardware, this
would mean running multiple chips in parallel checking each other's
states/outputs.  Architectures like that have been built for
reliability (e.g., Stratus), but generally they assume identical
processors.  Whether you can actually build such a thing with
deliberately different processors is an open question.  While in
theory someone could introduce the same spike into Intel, AMD,
and VIA chips, an attacker with that kind of capability is probably
already reading your mind directly anyway.

Of course, you'd end up with a machine no faster than your slowest
chip, and you'd have to worry about the correctness of the glue
circuitry that compares the results.  *Maybe* the NSA would build
such things for very special uses.  Whether it would be cheaper for
them to just build their own chip fab isn't at all clear.  (One thing
mentioned in the paper is that there are only 30 plants in the world
that can build leading-edge chips today, and that it simply isn't
practical any more to build your own.  I think the important issue
here is leading edge.  Yes, if you need the best performance, you
have few choices.  But a chip with 5-year-old technology is still
very powerful - more than powerful enough for many uses.  When it
comes to obsolete technology, you may have more choices - and
of course next year's 5 year old technology will be even more
powerful.  Yes, 5 years from now, there will only be 30 or so
plants with 2008 technology - but the stuff needed to build such a
plant will be available used, or as cheap versions of newer stuff,
so building your own will be much more practical.)
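
To make that concrete, here is a minimal software sketch of the
lockstep-and-compare idea; the three "chips" and the alarm are invented
stand-ins, not anything from the paper:

    # Toy model: N diverse implementations run in lockstep, with glue
    # logic that alarms on any divergence.  A real comparator would
    # check architectural state, not just one output value.
    def chip_a(x): return x * 2        # one vendor's implementation
    def chip_b(x): return x + x        # a second, independent design
    def chip_c(x): return x << 1       # a third

    CHIPS = [chip_a, chip_b, chip_c]

    def lockstep_step(x):
        outputs = [chip(x) for chip in CHIPS]
        if len(set(outputs)) != 1:     # the glue circuitry's only job
            raise RuntimeError("divergence: %r" % (outputs,))
        return outputs[0]

    for value in range(10):
        lockstep_step(value)           # runs at the slowest chip's pace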

-- Jerry

-
The Cryptography Mailing List
Unsubscribe by sending unsubscribe cryptography to [EMAIL PROTECTED]


Re: Designing and implementing malicious hardware

2008-04-28 Thread Ed Gerck

Leichter, Jerry wrote:

| I suspect the only heavy-weight defense is the same one we use against
| the Trusting Trust hook-in-the-compiler attack:  Cross-compile on
| as many compilers from as many sources as you can, on the assumption
| that not all compilers contain the same hook.
| ...
| Of course, you'd end up with a machine no faster than your slowest
| chip, and you'd have to worry about the correctness of the glue
| circuitry that compares the results.


Each chip does not have to be 100% independent, and does not have to 
be used 100% of the time.


Assuming a random selection of both outputs and chips for testing, and 
a finite set of possible outputs, it is possible to calculate what 
sampling ratio would provide an adequate confidence level -- a good 
guess is 5% sampling.


This should not create a significant impact on average speed, as 95% 
of the time the untested samples would not have to wait for 
verification (from the slower chips). One could also trust-certify 
each chip based on its positive, long-term performance -- which could 
allow that chip to run with much less sampling, or none at all.
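
As a back-of-the-envelope illustration (toy numbers, not a claim about
any particular system): if a fraction s of outputs is independently
chosen for cross-checking, a deviation that shows up in k outputs is
caught with probability 1 - (1-s)^k:

    # Detection confidence of independent random sampling.
    # s: fraction of outputs cross-checked; k: number of deviant outputs.
    def confidence(s, k):
        return 1.0 - (1.0 - s) ** k

    for k in (1, 10, 100, 1000):
        print(k, round(confidence(0.05, k), 4))
    # 5% sampling catches a deviation that recurs 100+ times almost
    # surely; one that appears only once is caught 5% of the time.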


In general, this approach is based on the properties of trust when 
viewed in terms of Shannon's IT (information theory) method, as 
explained in [*]. Trust is seen not as a subjective property, but as 
something that can be communicated and measured. One of the resulting 
rules is that trust cannot be communicated by self-assertions (i.e., 
by asking the same chip) [**]. Trust can be positive (what we call 
trust), negative (distrust), and zero (atrust -- there is no trust 
value associated with the information, neither trust nor distrust). 
More in [*].


Cheers,
Ed Gerck

 References:
[*] www.nma.com/papers/it-trust-part1.pdf
www.mcwg.org/mcg-mirror/trustdef.htm

[**] Ken's paper title (op. cit.) is, thus, identified as part of 
the very con game described in the paper.


-
The Cryptography Mailing List
Unsubscribe by sending unsubscribe cryptography to [EMAIL PROTECTED]


Re: Designing and implementing malicious hardware

2008-04-28 Thread Perry E. Metzger

Ed Gerck [EMAIL PROTECTED] writes:
 Each chip does not have to be 100% independent, and does not have to
 be used 100% of the time.

 Assuming a random selection of both outputs and chips for testing, and
 a finite set of possible outputs, it is possible to calculate what
 sampling ratio would provide an adequate confidence level -- a good
 guess is 5% sampling.

Not likely.

Sampling will not work. Sampling theory assumes statistical
independence and that the events that you're looking for are randomly
distributed. We're dealing with a situation in which the opponent is
doing things that are very much in violation of those assumptions.

The opponent is, on very very rare occasions, going to send you a
malicious payload that will do something bad. Almost all the time
they're going to do nothing at all. You need to be watching 100% of
the time if you're going to catch him with reasonable confidence, but
of course, I doubt even that will work given a halfway smart attacker.

The paper itself describes reasonable ways to prevent detection on the
basis of most other obvious methods -- power utilization, timing
issues, etc., can all be patched over well enough to render the
malhardware invisible to ordinary methods of analysis.

Truth be told, I think no defense against malicious hardware that
I've heard of will work reliably, and indeed I'm not sure that one
can be devised.


Perry

-
The Cryptography Mailing List
Unsubscribe by sending unsubscribe cryptography to [EMAIL PROTECTED]


Re: Designing and implementing malicious hardware

2008-04-28 Thread Leichter, Jerry
On Mon, 28 Apr 2008, Ed Gerck wrote:
| Leichter, Jerry wrote:
|  I suspect the only heavy-weight defense is the same one we use against
|  the Trusting Trust hook-in-the-compiler attack:  Cross-compile on
|  as many compilers from as many sources as you can, on the assumption
|  that not all compilers contain the same hook. ...
|  Of course, you'd end up with a machine no faster than your slowest
|  chip, and you'd have to worry about the correctness of the glue
|  circuitry that compares the results. 
| 
| Each chip does not have to be 100% independent, and does not have to
| be used 100% of the time.
| 
| Assuming a random selection of both outputs and chips for testing, and
| a finite set of possible outputs, it is possible to calculate what
| sampling ratio would provide an adequate confidence level -- a good
| guess is 5% sampling.
I'm not sure how you would construct a probability distribution that's
useful for this purpose.  Consider the form of one attack demonstrated
in the paper:  If a particular 64-bit value appears in a network packet,
the code will jump to the immediately succeeding byte in the packet.
Let's for the sake of argument assume that you will never, by chance,
see this 64-bit value across all chip instances across the life of the
chip.  (If you don't think 64 bits is enough to ensure that, use 128
or 256 or whatever.)  Absent an attack, you'll never see any deviation
from the theoretical behavior.  Once, during the lifetime of the system,
an attack is mounted which, say, grabs a single AES key from memory and
inserts it into the next outgoing network packet.  That should take no
more than a few tens of instructions.  What's the probability of your
catching that with any kind of sampling?
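
For concreteness, a toy simulation (the 5% rate is from the earlier
post; everything else is invented):

    # One malicious output in an entire lifetime of traffic, with 5% of
    # outputs randomly cross-checked against a reference chip.
    import random

    LIFETIMES = 100000
    SAMPLE_RATE = 0.05

    caught = sum(random.random() < SAMPLE_RATE for _ in range(LIFETIMES))
    print("attack caught in %.1f%% of lifetimes"
          % (100.0 * caught / LIFETIMES))
    # Roughly 5% -- the sampling rate itself, no matter how many benign
    # outputs were dutifully verified along the way.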

| This should not create a significant impact on average speed, as 95%
| of the time the untested samples would not have to wait for
| verification (from the slower chips). 
I don't follow this.  Suppose the system has been running for 1 second,
and you decide to compare states.  The slower system has only completed
a tenth of the instructions completed by the faster.  You now have to
wait .9 seconds for the slower one to catch up before you have anything
to compare.

If you could quickly load the entire state of the faster system just
before the instruction whose results you want to compare into the
slower one, you would only have to wait one of the slower system's
instruction times - but how would you do that?  Even assuming a
simple mapping between the full states of disparate systems, the
state is *huge* - all of memory, all the registers, hidden information
(cache entries, branch prediction buffers).  Yes, only a small amount
of it is relevant to the next instruction - but (a) how can you
find it; (b) how can you find it *given that the actual execution of
the next instruction may be arbitrarily different from what the
system model claims*?

|   One could also trust-certify
| each chip based on its positive, long term performance -- which could
| allow that chip to run with much less sampling, or none at all.
Long-term performance against a targeted attack means nothing.

| In general, this approach is based on the properties of trust when
| viewed in terms of Shannon's IT method, as explained in [*]. Trust is
| seen not as a subjective property, but as something that can be
| communicated and measured.  One of the resulting rules is that trust
| cannot be communicated by self-assertions (ie, asking the same chip)
| [**]. Trust can be positive (what we call trust), negative (distrust),
| and zero (atrust -- there is no trust value associated with the
| information, neither trust nor distrust). More in [*].
The papers look interesting and I'll have a look at them, but if you
want to measure trust, you have to have something to start with.  What
we are dealing with here is the difference between a random fault and
a targeted attack.  It's quite true that long experience with a chip
entitles you to trust that, given random data, it will most likely
produce the right results.  But no amount of testing can possibly
lead to proper trust that there isn't a special value that will induce
different behavior.
-- Jerry

| Cheers,
| Ed Gerck
| 
|  References:
| [*] www.nma.com/papers/it-trust-part1.pdf
| www.mcwg.org/mcg-mirror/trustdef.htm
| 
| [**] Ken's paper title (op. cit.) is, thus, identified to be part of
| the very con game described in the paper.

-
The Cryptography Mailing List
Unsubscribe by sending unsubscribe cryptography to [EMAIL PROTECTED]


Re: Designing and implementing malicious hardware

2008-04-28 Thread Ed Gerck

Perry E. Metzger wrote:

| Ed Gerck [EMAIL PROTECTED] writes:
|  Each chip does not have to be 100% independent, and does not have to
|  be used 100% of the time.
|
|  Assuming a random selection of both outputs and chips for testing, and
|  a finite set of possible outputs, it is possible to calculate what
|  sampling ratio would provide an adequate confidence level -- a good
|  guess is 5% sampling.
|
| Not likely.
|
| Sampling will not work. Sampling theory assumes statistical
| independence and that the events that you're looking for are randomly
| distributed.


Provided you have access to enough chip diversity so as to build a 
correction channel with sufficient capacity, Shannon's Tenth Theorem 
assures you that it is possible to reduce the effect of bad chips on 
the output to an error rate /as close to zero/ as you desire. There is 
no lower limiting value but zero.


Statistical independence is not required to be 100%. Events are not 
required to be uniformly distributed either. Sampling is required to 
be independent, but also not 100%.
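
As a toy rendering of the redundancy argument -- assuming each chip is
independently wrong with probability p, which is precisely the
independence assumption in dispute here:

    # Residual error of n-way majority voting over independently
    # failing chips: the probability that more than half err at once.
    from math import comb

    def majority_error(n, p):
        return sum(comb(n, k) * p**k * (1 - p)**(n - k)
                   for k in range(n // 2 + 1, n + 1))

    for n in (1, 3, 5, 9, 15):
        print(n, majority_error(n, 0.1))
    # The residual error falls toward zero as n grows -- but only
    # because the failures are assumed independent.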



| We're dealing with a situation in which the opponent is
| doing things that are very much in violation of those assumptions.


The counter-point is that the existence of a violation can be tested 
within a desired confidence level, and that confidence level is dynamic.



| The opponent is, on very very rare occasions, going to send you a
| malicious payload that will do something bad. Almost all the time
| they're going to do nothing at all. You need to be watching 100% of
| the time if you're going to catch him with reasonable confidence, but
| of course, I doubt even that will work given a halfway smart attacker.


The more comparison channels you have, and the more independent they 
are, the harder it is to compromise them /at the same time/.


In regard to time, one strategy is indeed to watch not 100% of the 
time but during random windows of certain lengths and intervals. The 
duty ratio for a certain desired detection threshold depends on the 
correction channel's total capacity, the signal dynamics, and some 
other variables. Different implementations will allow for different 
duty ratios for the same error-detection capability.



| The paper itself describes reasonable ways to prevent detection on the
| basis of most other obvious methods -- power utilization, timing
| issues, etc., can all be patched over well enough to render the
| malhardware invisible to ordinary methods of analysis.


Except as above; using a correction channel with enough capacity, the 
problem can /always/ be solved (i.e., with an error rate as close to 
zero as desired).



| Truth be told, I think no defense against malicious hardware that
| I've heard of will work reliably, and indeed I'm not sure that one
| can be devised.


As above, the problem is solvable (existence proof provided by 
Shannon's Tenth Theorem).  It is not a matter of whether it works -- 
the solution exists; it's a matter of implementation.


Cheers,
Ed Gerck

-
The Cryptography Mailing List
Unsubscribe by sending unsubscribe cryptography to [EMAIL PROTECTED]


Re: Designing and implementing malicious hardware

2008-04-28 Thread Perry E. Metzger

Ed Gerck [EMAIL PROTECTED] writes:
 Perry E. Metzger wrote:
 Ed Gerck [EMAIL PROTECTED] writes:
 Each chip does not have to be 100% independent, and does not have to
 be used 100% of the time.

 Assuming a random selection of both outputs and chips for testing, and
 a finite set of possible outputs, it is possible to calculate what
 sampling ratio would provide an adequate confidence level -- a good
 guess is 5% sampling.

 Not likely.

 Sampling will not work. Sampling theory assumes statistical
 independence and that the events that you're looking for are randomly
 distributed. 

 Provided you have access to enough chip diversity so as to build a
 correction channel with sufficient capacity, Shannon's Tenth Theorem
 assures you that it is possible to reduce the effect of bad chips on
 the output to an error rate /as close to zero/ as you desire. There is
 no lower limiting value but zero.

No. It really does not. Shannon's tenth theorem is about correcting
lossy channels with statistically random noise. This is about making
sure something bad doesn't happen to your computer like having someone
transmit blocks of your hard drive out on the network. I assure you
that Shannon's theorem doesn't speak about that possibility. The two
are not really related. It would be wonderful if they were, but they
aren't. Indeed, Shannon's tenth theorem doesn't even hold for error
correction if the noise on the channel is produced by an adversary
rather than being random.

I'm not particularly inclined to argue this at length.

Perry

-
The Cryptography Mailing List
Unsubscribe by sending unsubscribe cryptography to [EMAIL PROTECTED]


Re: Designing and implementing malicious hardware

2008-04-28 Thread Ed Gerck

Perry E. Metzger wrote:

| No. It really does not. Shannon's tenth theorem is about correcting
| lossy channels with statistically random noise. This is about making
| sure something bad doesn't happen to your computer like having someone
| transmit blocks of your hard drive out on the network. I assure you
| that Shannon's theorem doesn't speak about that possibility.


Yet, Shannon's tenth theorem can be proven without a hypothesis that 
noise is random, or that the signal is anything in particular.


Using intuition, because no formality is really needed, just consider 
that the noise is a well-defined sine function. The error-correcting 
channel provides the same sine function in counter-phase. You will 
see that the less random the noise is, the easier it gets. Not the 
other way around.
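
(The same intuition in toy numeric form, with every value made up:)

    # A deterministic noise term plus its counter-phase copy cancels.
    import math

    signal     = [math.sin(0.1 * t) for t in range(100)]   # the data
    noise      = [0.5 * math.sin(2.0 * t) for t in range(100)]
    correction = [-x for x in noise]                       # counter-phase

    received = [s + n + c for s, n, c in zip(signal, noise, correction)]
    assert all(abs(r - s) < 1e-12 for r, s in zip(received, signal))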


How about an active adversary? You just need to consider the 
adversary's reaction time and make sure that the error-correcting 
channel has enough capacity to counter-react within that reaction 
time. For chip fabrication, this may be quite long.


Cheers,
Ed Gerck

-
The Cryptography Mailing List
Unsubscribe by sending unsubscribe cryptography to [EMAIL PROTECTED]


Re: Designing and implementing malicious hardware

2008-04-28 Thread Perry E. Metzger

Ed Gerck [EMAIL PROTECTED] writes:
 Perry E. Metzger wrote:
 No. It really does not. Shannon's tenth theorem is about correcting
 lossy channels with statistically random noise. This is about making
 sure something bad doesn't happen to your computer like having someone
 transmit blocks of your hard drive out on the network. I assure you
 that Shannon's theorem doesn't speak about that possibility. 

 Yet, Shannon's tenth theorem can be proven without a hypothesis that
 noise is random, or that the signal is anything in particular.

Not quite. If I inject noise into a channel in the right way, I can
completely eradicate the signal. For example, I can inject a different
signal of exactly opposite phase.

However, in any case, this doesn't matter. We're not talking about
receiving a signal without errors at all. We're talking about assuring
that your microprocessor possesses no features such that it does
something evil, and that something can be completely in addition to
doing the things that you expect it to do, which it might continue to
do without pause.

Let's be completely concrete here. Nothing you have suggested would
work against the described attack in the paper AT ALL. You cannot find
evil chips with statistical sampling because you don't know what to
look for, and you can't detect them by running them part of the time
against good chips because they only behave evilly once in a blue moon
when the attacker chooses to have them behave that way. Indeed, I
don't even see how someone who had read the paper could suggest what
you have -- it makes no sense in context.

And with that, I'm cutting off this branch of the conversation.

-- 
Perry E. Metzger[EMAIL PROTECTED]

-
The Cryptography Mailing List
Unsubscribe by sending unsubscribe cryptography to [EMAIL PROTECTED]


Re: Designing and implementing malicious hardware

2008-04-27 Thread Ivan Krstić

On Apr 25, 2008, at 11:09 AM, Leichter, Jerry wrote:

| I remember seeing another, similar contest in which
| the goal was to produce a vote-counting program that
| looked completely correct, but biased the results.
| The winner was amazingly good - I consider myself
| pretty good at analyzing code, but even knowing that
| this code had a hook in it, I missed it completely.
| Worse, none of the code even set off my "why is it
| doing *that*" detector.


I was reminded of the same contest[0]. The winning date-agnostic  
entry[1] was by Michał Zalewski[2], and is nothing short of evil. I  
spotted the problem after staring at the code intensely for about a  
half hour, knowing in advance it was there. Had I not known, I don't  
think I'd have found it.


[0] http://graphics.stanford.edu/~danielrh/vote/vote.html
[1] http://graphics.stanford.edu/~danielrh/vote/mzalewski.c
[2] http://en.wikipedia.org/wiki/Micha%C5%82_Zalewski

--
Ivan Krstić [EMAIL PROTECTED] | http://radian.org

-
The Cryptography Mailing List
Unsubscribe by sending unsubscribe cryptography to [EMAIL PROTECTED]


Re: Designing and implementing malicious hardware

2008-04-26 Thread Leichter, Jerry
On Thu, 24 Apr 2008, Jacob Appelbaum wrote:
| Perry E. Metzger wrote:
|  A pretty scary paper from the Usenix LEET conference:
|  
|  http://www.usenix.org/event/leet08/tech/full_papers/king/king_html/
|  
|  The paper describes how, by adding a very small number of gates to a
|  microprocessor design (small enough that it would be hard to notice
|  them), you can create a machine that is almost impossible to defend
|  against an attacker who possesses a bit of secret knowledge. I
|  suggest reading it -- I won't do it justice with a small summary.
|  
|  It is about the most frightening thing I've seen in years -- I have
|  no idea how one might defend against it.
| 
| Silicon has no secrets.
| 
| I spent last weekend in Seattle and Bunnie (of XBox hacking
| fame/Chumby) gave a workshop with Karsten Nohl (who recently cracked
| MiFare).
| 
| In a matter of an hour, all of the students were able to take a
| selection of a chip (from an OK photograph) and walk through the
| transistor layout to describe the gate configuration. I was surprised
| (not being an EE person by training) at how easy it can be to
| understand production hardware. Debug pads, automated masking,
| etc. Karsten has written a set of MatLab extensions that he used to
| automatically describe the circuits of the mifare devices. Automation
| is key though, I think doing it by hand is the path of madness.
While analysis of the actual silicon will clearly have to be part of
any solution, it's going to be much harder than that:

1.  Critical circuitry will likely be tamper-resistant.
Tamper-resistance techniques make it hard to see what's
there, too.  So, paradoxically, the very mechanisms used
to protect circuitry against one attack make it more
vulnerable to another.  What this highlights, perhaps,
is the need for transparent tamper-resistance techniques,
which prevent tampering but don't interfere with inspec-
tion.

2.  An experienced designer can readily understand circuitry
that was designed normally.  This is analogous to the
ability of an experienced C programmer to understand what a
normal, decently-designed C program is doing.  Under-
standing what a poorly designed C program is doing is a
whole other story - just look at the history of the
Obfuscated C contests.  At least in that case, an
experienced analyst can raise the alarm that something
weird is going on.  But what about *deliberately deceptive*
C code?  Look up the Underhanded C Contest on Wikipedia.
The 2007 contest was to write a program that implements
a standard, reliable encryption algorithm, which some
percentage of the time makes the data easy to decrypt
(if you know how) - and which will look innocent to
an analyst.  There have been two earlier contests.
I remember seeing another, similar contest in which
the goal was to produce a vote-counting program that
looked completely correct, but biased the results.
The winner was amazingly good - I consider myself
pretty good at analyzing code, but even knowing that
this code had a hook in it, I missed it completely.
Worse, none of the code even set off my "why is it
doing *that*" detector.

3.  This is another step in a long line of attacks that
attack something by moving to a lower-level of abstraction
and using that to invalidate the assumptions that
implementations at higher levels of abstraction use.
There's a level below logic gates, the actual circuitry.
A paper dating back to 1999 - "Analysis of Unconventional
Evolved Electronics", CACM V42#4 (it doesn't seem to be
available on-line) - reported on experiments using genetic
algorithms to evolve an FPGA design to solve a simple
problem (something like: generate a -0.5V output if you
see a 200Hz input, and a +1V output if you see a 2kHz
input).  The genetic algorithm ran at the design level,
but fitness testing was done on actual, synthesized
circuits.

A human engineer given this problem would have used a
counter chain of some sort.  The evolved circuit had
nothing that looked remotely like a counter chain.  But
it worked ... and the experimenters couldn't figure out
exactly how.  Probing the FPGA generally caused it to
stop working.  The design included unconnected gates -
which, if removed, caused the circuit to stop working.
Presumably, the circuit was relying on the analogue
characteristics of the FPGA rather than its nominal
digital behavior.
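
(For flavor, the overall shape of such an experiment as a stub - in
the real work, measure_on_device meant loading the design onto a
physical FPGA and scoring its actual analogue outputs; everything
below is invented:)

    # Skeleton: evolution at the design level, fitness on real hardware.
    import random

    def random_design():
        return [random.randint(0, 1) for _ in range(64)]  # toy bitstream

    def mutate(design):
        child = design[:]
        child[random.randrange(len(child))] ^= 1
        return child

    def measure_on_device(design):
        # Placeholder score; the experiment measured a synthesized
        # circuit driven with 200Hz and 2kHz inputs.
        return sum(design)

    population = [random_design() for _ in range(20)]
    for _ in range(100):
        population.sort(key=measure_on_device, reverse=True)
        survivors = population[:10]
        population = survivors + [mutate(random.choice(survivors))
                                  for _ in range(10)]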

RE: Designing and implementing malicious hardware

2008-04-26 Thread Crawford Nathan-HMGT87
I suppose Ken Thompson's "Reflections on Trusting Trust" is appropriate
here.  This kind of vulnerability has been known about for quite some
time, but did not have much relevance until the advent of ubiquitous
networking.

-
The Cryptography Mailing List
Unsubscribe by sending unsubscribe cryptography to [EMAIL PROTECTED]


Re: Designing and implementing malicious hardware

2008-04-26 Thread Karsten Nohl


Jacob Appelbaum wrote:
| Perry E. Metzger wrote:
|  A pretty scary paper from the Usenix LEET conference:
|
|  http://www.usenix.org/event/leet08/tech/full_papers/king/king_html/
|
|  The paper describes how, by adding a very small number of gates to a
|  microprocessor design (small enough that it would be hard to notice
|  them), you can create a machine that is almost impossible to defend
|  against an attacker who possesses a bit of secret knowledge. I suggest
|  reading it -- I won't do it justice with a small summary.
|
|  It is about the most frightening thing I've seen in years -- I have no
|  idea how one might defend against it.
|
| Silicon has no secrets.
|
| I spent last weekend in Seattle and Bunnie (of XBox hacking fame/Chumby)
| gave a workshop with Karsten Nohl (who recently cracked MiFare).
|
| In a matter of an hour, all of the students were able to take a
| selection of a chip (from an OK photograph) and walk through the
| transistor layout to describe the gate configuration. I was surprised
| (not being an EE person by training) at how easy it can be to understand
| production hardware. Debug pads, automated masking, etc. Karsten has
| written a set of MatLab extensions that he used to automatically
| describe the circuits of the mifare devices. Automation is key though, I
| think doing it by hand is the path of madness.
|
| If we could convince (this is the hard part) companies to publish what
| they think their chips should look like, we'd have a starting point.
|
| Perhaps,
| Jacob


Silicon has no secrets, indeed. But it's also much too complex for 
exhaustive functionality tests, in particular when the tests are 
open-ended, as they need to be when hunting for backdoors.


While a single chip designer will perhaps not have the authority needed 
to significantly alter functionality, a small team of designers could 
very well adapt their part of a design and introduce a backdoor.


Hardware designs are currently moving away from what in software would 
be called open source. Chip obfuscation meant to protect IP, combined 
with the ever-increasing size of chips, makes it almost impossible to 
reverse-engineer an entire chip.


Bunnie pointed out that the secret debugging features of current 
processors perhaps already include functionality that breaks process 
separation. The fact that these features stay secret suggests that it 
is in fact hard to detect any undocumented functionality.


Assuming that hardware backdoors can be built, the interesting question 
becomes how to defend against them. Even after a particular triggering 
string is identified, it is not clear whether software can be used to 
detect malicious programs. It almost appears as if the processor would 
need a hardware-based virus-scanner of sorts. This scanner could be 
simple, as it only has to match known signatures, but would need to 
have access to a large number of internal data structures while being 
developed by a completely separate team of designers.
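
A minimal software model of what such a scanner might do (the trigger
values and bus-word stream below are invented for illustration):

    # Toy signature matcher over a stream of bus words -- the sort of
    # check a separate hardware monitor could run alongside the CPU.
    KNOWN_TRIGGERS = {
        0xDEADBEEFCAFEF00D,   # invented 64-bit trigger signatures
        0x0123456789ABCDEF,
    }

    def scan(bus_words):
        for i, word in enumerate(bus_words):
            if word in KNOWN_TRIGGERS:
                yield i, word         # a real monitor would halt or trap

    traffic = [0x0, 0x42, 0xDEADBEEFCAFEF00D, 0x7]
    for position, word in scan(traffic):
        print("trigger %#x seen at bus word %d" % (word, position))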


-Karsten

-
The Cryptography Mailing List
Unsubscribe by sending unsubscribe cryptography to [EMAIL PROTECTED]


Re: Designing and implementing malicious hardware

2008-04-26 Thread Anne Lynn Wheeler

Leichter, Jerry wrote:

| While analysis of the actual silicon will clearly have to be part of
| any solution, it's going to be much harder than that:
|
| 1.  Critical circuitry will likely be tamper-resistant.
| Tamper-resistance techniques make it hard to see what's
| there, too.  So, paradoxically, the very mechanisms used
| to protect circuitry against one attack make it more
| vulnerable to another.  What this highlights, perhaps,
| is the need for transparent tamper-resistance techniques,
| which prevent tampering but don't interfere with inspec-
| tion.


traditional approach is to make the compromise more expensive than any
reasonable expectation of benefit (security proportional to risk).

helping bracket expected fraud ROI is an infrastructure that can (quickly)
invalidate (identified) compromised units. there have been some issues
with these kinds of infrastructures since they have also been identified
with being able to support DRM (and other kinds of anti-piracy) efforts.

disclaimer: we actually have done some number of patents (that are
assigned) in this area:
http://www.garlic.com/~lynn/aadssummary.htm

-
The Cryptography Mailing List
Unsubscribe by sending unsubscribe cryptography to [EMAIL PROTECTED]


Re: Designing and implementing malicious hardware

2008-04-26 Thread Adam Fields
On Sat, Apr 26, 2008 at 02:33:11AM -0400, Karsten Nohl wrote:
[...]
 Assuming that hardware backdoors can be built, the interesting question 
 becomes how to defend against them. Even after a particular triggering 
 string is identified, it is not clear whether software can be used to 
 detect malicious programs. It almost appears as if the processor would 
 need a hardware-based virus-scanner of sorts. This scanner could be 
 simple, as it only has to match known signatures, but would need to 
 have access to a large number of internal data structures while being 
 developed by a completely separate team of designers.

Wouldn't it be fun to assume that these are already present in all
sorts of devices?

-- 
- Adam

** Expert Technical Project and Business Management
 System Performance Analysis and Architecture
** [ http://www.adamfields.com ]

[ http://www.morningside-analytics.com ] .. Latest Venture
[ http://www.confabb.com ]  Founder
[ http://www.aquick.org/blog ]  Blog
[ http://www.adamfields.com/resume.html ].. Experience
[ http://www.flickr.com/photos/fields ] ... Photos
[ http://www.aquicki.com/wiki ].Wiki

-
The Cryptography Mailing List
Unsubscribe by sending unsubscribe cryptography to [EMAIL PROTECTED]


Re: Designing and implementing malicious hardware

2008-04-24 Thread Jacob Appelbaum
Perry E. Metzger wrote:
 A pretty scary paper from the Usenix LEET conference:
 
 http://www.usenix.org/event/leet08/tech/full_papers/king/king_html/
 
 The paper describes how, by adding a very small number of gates to a
 microprocessor design (small enough that it would be hard to notice
 them), you can create a machine that is almost impossible to defend
 against an attacker who possesses a bit of secret knowledge. I suggest
 reading it -- I won't do it justice with a small summary.
 
 It is about the most frightening thing I've seen in years -- I have no
 idea how one might defend against it.
 

Silicon has no secrets.

I spent last weekend in Seattle and Bunnie (of XBox hacking fame/Chumby)
gave a workshop with Karsten Nohl (who recently cracked MiFare).

In a matter of an hour, all of the students were able to take a
selection of a chip (from an OK photograph) and walk through the
transistor layout to describe the gate configuration. I was surprised
(not being an EE person by training) at how easy it can be to understand
production hardware. Debug pads, automated masking, etc. Karsten has
written a set of MatLab extensions that he used to automatically
describe the circuits of the mifare devices. Automation is key though, I
think doing it by hand is the path of madness.

If we could convince (this is the hard part) companies to publish what
they think their chips should look like, we'd have a starting point.

Perhaps,
Jacob

-
The Cryptography Mailing List
Unsubscribe by sending unsubscribe cryptography to [EMAIL PROTECTED]