RE: more on malicious hardware

2008-04-28 Thread Scott Guthery
 
> Adding a backdoor to chips is a different story, though, since that
> would require cutting a second set of masks. I am assuming that there
> must be no backdoor in the legitimately produced chips since the
> client would detect it as a slight violation of some of their timing
> simulations. The client also often inspects the masks before the
> chips are produced and basically reverse-engineers the whole chip on
> that level.

A backdoor -- hardware or software -- in a smart card or TPM would be
difficult to detect by either of these means.  In the case that nation A is
buying these from nation F, don't you think that nation F would be motivated
to slip in a couple of extra lines of code or a couple of hundred extra gates,
just in case?  If A got into a tangle with C, F would be in a very strong
position.



Re: Designing and implementing malicious hardware

2008-04-28 Thread Leichter, Jerry
On Sat, 26 Apr 2008, Karsten Nohl wrote:
| Assuming that hardware backdoors can be built, the interesting
| question becomes how to defend against them. Even after a particular
| triggering string is identified, it is not clear whether software can
| be used to detect malicious programs. It almost appears as if the
| processor would need a hardware-based virus-scanner of sorts. This
| scanner could be simple, as it only has to match known signatures, but
| would need to have access to a large number of internal data structures
| while being developed by a completely separate team of designers.
I suspect the only heavy-weight defense is the same one we use against
the Trusting Trust hook-in-the-compiler attack:  Cross-compile on
as many compilers from as many sources as you can, on the assumption
that not all compilers contain the same hook.  For hardware, this
would mean running multiple chips in parallel, checking each other's
states/outputs.  Architectures like that have been built for
reliability (e.g., Stratus), but generally they assume identical
processors.  Whether you can actually build such a thing with
deliberately different processors is an open question.  While in
theory someone could introduce the same spike into Intel, AMD,
and VIA chips, an attacker with that kind of capability is probably
already reading your mind directly anyway.
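
To make the architecture concrete, here is a toy sketch in Python (the
ToyCPU model, its opcodes, and the magic trigger value are all purely
illustrative, not a model of any real chip):

    from dataclasses import dataclass

    @dataclass
    class ToyCPU:
        """Stand-in for one vendor's chip: a trivial accumulator machine."""
        name: str
        acc: int = 0
        backdoored: bool = False   # hidden trigger, for demonstration only

        def step(self, op, arg):
            if op == "add":
                self.acc += arg
            elif op == "out":
                # a subverted chip misbehaves only on a magic operand
                if self.backdoored and arg == 0xDEADBEEF:
                    return self.acc ^ 1
                return self.acc
            return None

    def lockstep(cpus, program):
        """Feed every chip the same instruction stream; halt on disagreement."""
        outs = set()
        for op, arg in program:
            outs = {cpu.step(op, arg) for cpu in cpus}
            if len(outs) != 1:
                raise RuntimeError(f"divergence on ({op}, {arg:#x}): {outs}")
        return outs.pop()

    cpus = [ToyCPU("vendorA"), ToyCPU("vendorB"),
            ToyCPU("vendorC", backdoored=True)]
    print(lockstep(cpus, [("add", 7), ("out", 0)]))   # chips agree: prints 7
    # lockstep(cpus, [("out", 0xDEADBEEF)])           # would raise: divergence

The comparator here plays the role of the glue circuitry mentioned
below -- which you then have to trust.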

Of course, you'd end up with a machine no faster than your slowest
chip, and you'd have to worry about the correctness of the glue
circuitry that compares the results.  *Maybe* the NSA would build
such things for very special uses.  Whether it would be cheaper for
them to just build their own chip fab isn't at all clear.  (One thing
mentioned in the paper is that there are only 30 plants in the world
that can build leading-edge chips today, and that it simply isn't
practical any more to build your own.  I think the important issue
here is "leading edge."  Yes, if you need the best performance, you
have few choices.  But a chip with 5-year-old technology is still
very powerful - more than powerful enough for many uses.  When it
comes to obsolete technology, you may have more choices - and
of course next year's 5 year old technology will be even more
powerful.  Yes, 5 years from now, there will only be 30 or so
plants with 2008 technology - but the stuff needed to build such a
plant will be available used, or as cheap versions of newer stuff,
so building your own will be much more practical.)

-- Jerry



Re: Designing and implementing malicious hardware

2008-04-28 Thread Ed Gerck

Leichter, Jerry wrote:

> I suspect the only heavy-weight defense is the same one we use against
> the Trusting Trust hook-in-the-compiler attack:  Cross-compile on
> as many compilers from as many sources as you can, on the assumption
> that not all compilers contain the same hook.
> ...
> Of course, you'd end up with a machine no faster than your slowest
> chip, and you'd have to worry about the correctness of the glue
> circuitry that compares the results.


Each chip does not have to be 100% independent, and does not have to 
be used 100% of the time.


Assuming a random selection of both outputs and chips for testing, and 
a finite set of possible outputs, it is possible to calculate what 
sampling ratio would provide an adequate confidence level -- a good 
guess is 5% sampling.
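
Under those assumptions -- faults occurring independently at some rate
f, and each output independently verified with probability s -- the
calculation is a one-liner; a sketch in Python, with purely
illustrative numbers:

    def p_detect(s, f, n):
        """P(at least one fault is caught in n outputs) = 1 - (1 - s*f)**n."""
        return 1 - (1 - s * f) ** n

    # e.g. a 1% fault rate, 5% sampling, 10,000 outputs:
    print(p_detect(0.05, 0.01, 10_000))   # ~0.99

Everything rests, of course, on the fault rate being appreciable and
on the faults being independent of the sampling.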


This should not create a significant impact on average speed, as 95% 
of the time the untested samples would not have to wait for 
verification (from the slower chips). One could also trust-certify 
each chip based on its positive long-term performance -- which could 
allow that chip to run with much less sampling, or none at all.


In general, this approach is based on the properties of trust when 
viewed in terms of Shannon's IT (information theory) method, as 
explained in [*]. Trust is seen not as a subjective property, but as 
something that can be communicated and measured. One of the resulting 
rules is that trust cannot be communicated by self-assertions (ie, 
asking the same chip) [**]. Trust can be positive (what we call 
trust), negative (distrust), and zero (atrust -- there is no trust 
value associated with the information, neither trust nor distrust). 
More in [*].


Cheers,
Ed Gerck

 References:
[*] www.nma.com/papers/it-trust-part1.pdf
www.mcwg.org/mcg-mirror/trustdef.htm

[**] Ken's paper title (op. cit.) is, thus, identified to be part of 
the very con game described in the paper.




Re: Designing and implementing malicious hardware

2008-04-28 Thread Perry E. Metzger

Ed Gerck [EMAIL PROTECTED] writes:
 Each chip does not have to be 100% independent, and does not have to
 be used 100% of the time.

 Assuming a random selection of both outputs and chips for testing, and
 a finite set of possible outputs, it is possible to calculate what
 sampling ratio would provide an adequate confidence level -- a good
 guess is 5% sampling.

Not likely.

Sampling will not work. Sampling theory assumes statistical
independence and that the events that you're looking for are randomly
distributed. We're dealing with a situation in which the opponent is
doing things that are very much in violation of those assumptions.

The opponent is, on very very rare occasions, going to send you a
malicious payload that will do something bad. Almost all the time
they're going to do nothing at all. You need to be watching 100% of
the time if you're going to catch him with reasonable confidence, but
of course, I doubt even that will work given a halfway smart attacker.

The paper itself describes reasonable ways to prevent detection by
most other obvious methods -- power utilization, timing issues, etc.,
can all be patched over well enough to render the malhardware
invisible to ordinary methods of analysis.

Truth be told, I think there is no defense against malicious hardware
that I've heard of that will work reliably, and indeed I'm not sure
that one can be devised.


Perry



Re: Designing and implementing malicious hardware

2008-04-28 Thread Leichter, Jerry
On Mon, 28 Apr 2008, Ed Gerck wrote:
| Leichter, Jerry wrote:
|  I suspect the only heavy-weight defense is the same one we use against
|  the Trusting Trust hook-in-the-compiler attack:  Cross-compile on
|  as many compilers from as many sources as you can, on the assumption
|  that not all compilers contain the same hook. ...
|  Of course, you'd end up with a machine no faster than your slowest
|  chip, and you'd have to worry about the correctness of the glue
|  circuitry that compares the results. 
| 
| Each chip does not have to be 100% independent, and does not have to
| be used 100% of the time.
| 
| Assuming a random selection of both outputs and chips for testing, and
| a finite set of possible outputs, it is possible to calculate what
| sampling ratio would provide an adequate confidence level -- a good
| guess is 5% sampling.
I'm not sure how you would construct a probability distribution that's
useful for this purpose.  Consider the form of one attack demonstrated
in the paper:  If a particular 64-bit value appears in a network packet,
the code will jump to the immediately succeeding byte in the packet.
Let's for the sake of argument assume that you will never, by chance,
see this 64-bit value across all chip instances across the life of the
chip.  (If you don't think 64 bits is enough to ensure that, use 128
or 256 or whatever.)  Absent an attack, you'll never see any deviation
from the theoretical behavior.  Once, during the lifetime of the system,
an attack is mounted which, say, grabs a single AES key from memory and
inserts it into the next outgoing network packet.  That should take no
more than a few tens of instructions.  What's the probability of your
catching that with any kind of sampling?
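
To put a number on it (a hypothetical model: the attack shows up as k
anomalous events over the machine's whole lifetime, and each event is
independently verified with probability s):

    def p_catch(s, k):
        """P(at least one of k malicious events falls in the sampled fraction s)."""
        return 1 - (1 - s) ** k

    print(p_catch(0.05, 1))   # 0.05 -- a one-shot attack is missed 95% of the time

The lifetime of the system never enters: a once-ever event is caught
with probability equal to the sampling ratio, no more.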

| This should not create a significant impact on average speed, as 95%
| of the time the untested samples would not have to wait for
| verification (from the slower chips). 
I don't follow this.  Suppose the system has been running for 1 second,
and you decide to compare states.  The slower system has only completed
a tenth of the instructions completed by the faster.  You now have to
wait .9 seconds for the slower one to catch up before you have anything
to compare.

If you could quickly load the entire state of the faster system just
before the instruction whose results you want to compare into the
slower one, you would only have to wait one of the slower system's
instruction times - but how would you do that?  Even assuming a
simple mapping between the full states of disparate systems, the
state is *huge* - all of memory, all the registers, hidden information
(cache entries, branch prediction buffers).  Yes, only a small amount
of it is relevant to the next instruction - but (a) how can you
find it; (b) how can you find it *given that the actual execution of
the next instruction may be arbitrarily different from what the
system model claims*?

|   One could also trust-certify
| each chip based on its positive, long term performance -- which could
| allow that chip to run with much less sampling, or none at all.
Long-term performance against a targeted attack means nothing.

| In general, this approach is based on the properties of trust when
| viewed in terms of Shannon's IT method, as explained in [*]. Trust is
| seen not as a subjective property, but as something that can be
| communicated and measured.  One of the resulting rules is that trust
| cannot be communicated by self-assertions (ie, asking the same chip)
| [**]. Trust can be positive (what we call trust), negative (distrust),
| and zero (atrust -- there is no trust value associated with the
| information, neither trust nor distrust). More in [*].
The papers look interesting and I'll have a look at them, but if you
want to measure trust, you have to have something to start with.  What
we are dealing with here is the difference between a random fault and
a targeted attack.  It's quite true that long experience with a chip
entitles you to trust that, given random data, it will most likely
produce the right results.  But no amount of testing can possibly
lead to proper trust that there isn't a special value that will induce
different behavior.
-- Jerry

| Cheers,
| Ed Gerck
| 
|  References:
| [*] www.nma.com/papers/it-trust-part1.pdf
| www.mcwg.org/mcg-mirror/trustdef.htm
| 
| [**] Ken's paper title (op. cit.) is, thus, identified to be part of
| the very con game described in the paper.



Re: Designing and implementing malicious hardware

2008-04-28 Thread Ed Gerck

Perry E. Metzger wrote:

> Ed Gerck [EMAIL PROTECTED] writes:
> > Each chip does not have to be 100% independent, and does not have to
> > be used 100% of the time.
> >
> > Assuming a random selection of both outputs and chips for testing, and
> > a finite set of possible outputs, it is possible to calculate what
> > sampling ratio would provide an adequate confidence level -- a good
> > guess is 5% sampling.
>
> Not likely.
>
> Sampling will not work. Sampling theory assumes statistical
> independence and that the events that you're looking for are randomly
> distributed.


Provided you have access to enough chip diversity so as to build a 
correction channel with sufficient capacity, Shannon's Tenth Theorem 
assures you that it is possible to reduce the effect of bad chips on 
the output to an error rate /as close to zero/ as you desire. There is 
no lower limiting value but zero.
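
To illustrate the claim (in Python, and under the crucial assumption
that the chips fail independently): with plain majority voting among n
chips, each wrong with probability p < 1/2, the probability that the
majority is wrong falls exponentially in n.

    from math import comb

    def p_majority_wrong(p, n):
        """P(more than half of n independent replicas are wrong)."""
        return sum(comb(n, k) * p**k * (1 - p)**(n - k)
                   for k in range(n // 2 + 1, n + 1))

    for n in (3, 7, 15):
        print(n, p_majority_wrong(0.1, n))
    # 3 0.028   7 ~0.0027   15 ~3e-5  -- as close to zero as you like,
    # but only for as long as the independence assumption holds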


Statistical independence is not required to be 100%. Events are not 
required to be uniformly distributed either. Sampling is required to 
be independent, but also not 100%.



> We're dealing with a situation in which the opponent is
> doing things that are very much in violation of those assumptions.


The counter-point is that the existence of a violation can be tested 
to within a desired confidence level, and that confidence level can be 
adjusted dynamically.



> The opponent is, on very very rare occasions, going to send you a
> malicious payload that will do something bad. Almost all the time
> they're going to do nothing at all. You need to be watching 100% of
> the time if you're going to catch him with reasonable confidence, but
> of course, I doubt even that will work given a halfway smart attacker.


The more comparison channels you have, and the more independent they 
are, the harder it is to compromise them /at the same time/.


In regard to time, one strategy is indeed to keep watch over the whole 
time line, but only during random windows of varying length and 
spacing. The duty ratio for a certain desired detection threshold 
depends on the correction channel's total capacity, the signal 
dynamics, and some other variables. Different implementations will 
allow for different duty ratios for the same error-detection 
capability.
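
A toy Monte Carlo of the random-window strategy (all parameters
illustrative):

    import random

    def p_caught(d=0.05, window=1.0, total=1000.0, burst=0.01, trials=20_000):
        """Watcher is on for windows covering duty ratio d of total time;
        an attack burst of length `burst` lands at a random instant."""
        n_windows = int(d * total / window)
        hits = 0
        for _ in range(trials):
            starts = [random.uniform(0, total - window) for _ in range(n_windows)]
            t = random.uniform(0, total - burst)
            if any(s < t + burst and t < s + window for s in starts):
                hits += 1
        return hits / trials

    print(p_caught())   # ~0.05: for a short burst, the duty ratio is the ceiling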



> The paper itself describes reasonable ways to prevent detection by
> most other obvious methods -- power utilization, timing issues, etc.,
> can all be patched over well enough to render the malhardware
> invisible to ordinary methods of analysis.


Except as above: using a correction channel with enough capacity, the 
problem can /always/ be solved (ie, with an error rate as close to 
zero as desired).



> Truth be told, I think there is no defense against malicious hardware
> that I've heard of that will work reliably, and indeed I'm not sure
> that one can be devised.


As above, the problem is solvable (existence proof provided by 
Shannon's Tenth Theorem).  It is not a matter of whether it works -- 
the solution exists; it's a matter of implementation.


Cheers,
Ed Gerck



defending against evil in all layers of hardware and software

2008-04-28 Thread John Denker
This is an important discussion.

The threats are real, and we need to defend against them.

We need to consider the _whole_ problem, top to bottom.  The
layers that could be subverted include, at a minimum:
 -- The cpu chip itself (which set off the current flurry of
  interest).
 -- The boot rom.
 -- The BIOS code that lives on practically every card plugged
  into the PCI bus.
 -- Board-level stuff like memory controllers and I/O bridges.
 -- The operating system.
 -- Compilers, as Uncle Ken pointed out.
  http://www.ece.cmu.edu/~ganger/712.fall02/papers/p761-thompson.pdf
 -- Your secure application.
 -- Users.

As a particular example where PCs that we might wish to be secure 
are known to be under attack, consider electronic voting machines.  
In most cases there's a PC in there, running Windows CE.  Some
application software was provided by people with felony fraud
convictions.  Means, motive, and opportunity are all present.
There is ample evidence that problems have occurred.  These
are not confined to the Florida fiasco in 2000.  An example from 
2004 is the voting machine in Franklin County, Ohio that recorded 
4,258 votes for Bush when only 638 voters showed up.
  http://www.truthout.org/Conyersreport.pdf

This should not be an occasion for idly wringing our hands, nor 
sticking our head in the sand, nor looking under the lamp-post 
where the looking is easy.  We need to look at all of this stuff.  
And we can.  We are not defenseless.

As in all security, we need not aim for absolute security.  An 
often-useful approach is to do things that raise the cost to 
the attacker out of proportion to the cost to the defender.

For software, for firmware, and to some extent even for silicon
masks, SCM (source code management) systems, if used wisely, can
help a lot.  Consider for example a policy requiring every delta to
the software to be submitted by one person and tested by another
before being committed to the main branch of the project.  Both
the submitter and the tester would digitally sign the delta.  
This creates a long-tailed liability for anybody who tries to
sneak in a trojan.  This is AFAIK the simplest defense against
high-grade attacks such as Ken's, which leave no long-term trace
in the source code (because the trojan is self-replicating).  
The point is that there is a long-term trace in the SCM logs.
We can make the logs effectively permanent and immutable.
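
A minimal sketch of the two-signature rule, using Ed25519 via the
pyca/cryptography package (the record layout and names are
hypothetical; in practice the signatures and deltas would live in the
SCM log itself):

    from cryptography.exceptions import InvalidSignature
    from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey
    from cryptography.hazmat.primitives.serialization import Encoding, PublicFormat

    def raw(pub):
        return pub.public_bytes(Encoding.Raw, PublicFormat.Raw)

    def commit_allowed(delta, sub_pub, sub_sig, test_pub, test_sig):
        """Accept a delta only if the submitter and a *distinct* tester signed it."""
        if raw(sub_pub) == raw(test_pub):
            return False               # submitter may not test his own work
        try:
            sub_pub.verify(sub_sig, delta)
            test_pub.verify(test_sig, delta)
        except InvalidSignature:
            return False
        return True

    submitter, tester = Ed25519PrivateKey.generate(), Ed25519PrivateKey.generate()
    delta = b"--- a/vote.c\n+++ b/vote.c\n..."
    print(commit_allowed(delta,
                         submitter.public_key(), submitter.sign(delta),
                         tester.public_key(), tester.sign(delta)))   # True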

Of course we should insist on open-source boot ROM code:
  http://www.coreboot.org/Welcome_to_coreboot
The boot ROM should check the pgp signature of each PCI card's
BIOS code before letting it get control.  And then it should
check the pgp signature of the operating system before booting 
it.  I don't know of any machine that actually does this, but
it all seems perfectly doable.
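
The order of checks would look something like this sketch (Ed25519
stands in here for the pgp signatures; the stage blobs and key
handling are hypothetical, and coreboot itself does not work exactly
this way):

    from cryptography.exceptions import InvalidSignature
    from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

    def boot(root_pub, stages):
        """stages: (name, blob, signature) tuples, in the order they get control."""
        for name, blob, sig in stages:
            try:
                root_pub.verify(sig, blob)
            except InvalidSignature:
                raise SystemExit(f"refusing to run unsigned stage: {name}")
            print(f"verified {name}; handing over control")
            # execute(blob) would happen here

    vendor = Ed25519PrivateKey.generate()        # signing happens offline
    stages = [(n, b, vendor.sign(b)) for n, b in
              [("NIC option ROM", b"\x55\xaa..."), ("kernel", b"\x7fELF...")]]
    boot(vendor.public_key(), stages)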

Another line of defense involves closing the loop.  For example,
one could in principle find Ken's trojan by disassembling the
compiler and looking for code that doesn't seem to belong.
I have personally disassembled a couple of operating systems
(although this was back when operating systems were smaller
than they are now).

We can similarly close the loop on chips.  As others have pointed
out, silicon has no secrets.  A cost-effective way to check for 
trojans would be to buy more voting machines than necessary, and 
choose /at random/ a few of them to be torn down for testing.
(This has obvious analogies to sampling methods used in many
crypto algorithms.)  For starters, we grind open the CPU chips
and check that they are all the same.  That's much easier than
checking the detailed functionality of each one.  And we check
that the CPUs in the voting machines are the same as CPUs from 
another source, perhaps WAL*MART, on the theory that the attacker
finds it harder to subvert all of WAL*MART than to subvert just
a truckload of voting machines.
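
The teardown arithmetic is plain hypergeometric sampling; a quick
sketch with made-up numbers:

    from math import comb

    def p_sample_catches(N, b, n):
        """P(a random teardown of n machines out of N includes >= 1 of b bad ones)."""
        return 1 - comb(N - b, n) / comb(N, n)

    print(p_sample_catches(N=1000, b=100, n=20))   # ~0.88: broad subversion is risky
    print(p_sample_catches(N=1000, b=5, n=20))     # ~0.10: a targeted handful slips by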

Checksumming the boot ROM in the torn-down machine is easy.  I
emphasize that we should *not* rely on asking a running machine
to checksum its own ROMs, because it is just too easy to subvert
the program that calculates the checksum.  To defend against
this, we tear down multiple machines, and give one randomly
selected ROM to the Democratic pollwatchers, one to the Republican 
pollwatchers, et cetera.  This way nobody needs to trust anybody
else;  each guy is responsible for making sure _his_ checksummer
is OK.

All of this works hand-in-glove with old-fashioned procedural
security and physical security.  As the saying goes, if you
don't have physical security, you don't have security.  But
the converse is true, too:  Placing armed guards around a vault 
full of voting machines doesn't make the machines any less buggy 
than they were when they went into the vault. That's why we need 
a balanced approach that gets all the layers to work together.


Re: Designing and implementing malicious hardware

2008-04-28 Thread Perry E. Metzger

Ed Gerck [EMAIL PROTECTED] writes:
 Perry E. Metzger wrote:
 Ed Gerck [EMAIL PROTECTED] writes:
 Each chip does not have to be 100% independent, and does not have to
 be used 100% of the time.

 Assuming a random selection of both outputs and chips for testing, and
 a finite set of possible outputs, it is possible to calculate what
 sampling ratio would provide an adequate confidence level -- a good
 guess is 5% sampling.

 Not likely.

 Sampling will not work. Sampling theory assumes statistical
 independence and that the events that you're looking for are randomly
 distributed. 

 Provided you have access to enough chip diversity so as to build a
 correction channel with sufficient capacity, Shannon's Tenth Theorem
 assures you that it is possible to reduce the effect of bad chips on
 the output to an error rate /as close to zero/ as you desire. There is
 no lower, limiting value but zero.

No. It really does not. Shannon's tenth theorem is about correcting
lossy channels with statistically random noise. This is about making
sure something bad doesn't happen to your computer like having someone
transmit blocks of your hard drive out on the network. I assure you
that Shannon's theorem doesn't speak about that possibility. The two
are not really related. It would be wonderful if they were, but they
aren't. Indeed, Shannon's tenth theorem doesn't even hold for error
correction if the noise on the channel is produced by an adversary
rather than being random.

I'm not particularly inclined to argue this at length.

Perry



Re: Designing and implementing malicious hardware

2008-04-28 Thread Ed Gerck

Perry E. Metzger wrote:

No. It really does not. Shannon's tenth theorem is about correcting
lossy channels with statistically random noise. This is about making
sure something bad doesn't happen to your computer like having someone
transmit blocks of your hard drive out on the network. I assure you
that Shannon's theorem doesn't speak about that possibility. 


Yet, Shannon's tenth theorem can be proven without a hypothesis that 
noise is random, or that the signal is anything in particular.


Using intuition, because no formality is really needed, just consider 
that the noise is a well-defined sine function. The error-correcting 
channel provides the same sine function in counter-phase. You will 
see that the less random the noise is, the easier it gets. Not the 
other way around.


How about an active adversary? You just need to consider the 
adversary's reaction time and make sure that the error-correcting 
channel has enough capacity to counter-react within that reaction 
time. For chip fabrication, this may be quite long.


Cheers,
Ed Gerck



Re: Designing and implementing malicious hardware

2008-04-28 Thread Perry E. Metzger

Ed Gerck [EMAIL PROTECTED] writes:
 Perry E. Metzger wrote:
 No. It really does not. Shannon's tenth theorem is about correcting
 lossy channels with statistically random noise. This is about making
 sure something bad doesn't happen to your computer like having someone
 transmit blocks of your hard drive out on the network. I assure you
 that Shannon's theorem doesn't speak about that possibility. 

 Yet, Shannon's tenth theorem can be proven without a hypothesis that
 noise is random, or that the signal is anything in particular.

Not quite. If I inject noise into a channel in the right way, I can
completely eradicate the signal. For example, I can inject a different
signal of exactly opposite phase.

However, in any case, this doesn't matter. We're not talking about
receiving a signal without errors at all. We're talking about assuring
that your microprocessor possesses no features such that it does
something evil, and that something can be completely in addition to
doing the things that you expect it to do, which it might continue to
do without pause.

Let's be completely concrete here. Nothing you have suggested would
work against the described attack in the paper AT ALL. You cannot find
evil chips with statistical sampling because you don't know what to
look for, and you can't detect them by running them part of the time
against good chips because they only behave evilly once in a blue moon
when the attacker chooses to have them behave that way. Indeed, I
don't even see how someone who had read the paper could suggest what
you have -- it makes no sense in context.

And with that, I'm cutting off this branch of the conversation.

-- 
Perry E. Metzger[EMAIL PROTECTED]



Doubts about efficiency of Shor's factoring algorithm in quantum computers

2008-04-28 Thread Charles McElwain
In case you missed this because it appeared in the quantum physics 
section: it is relevant to quantum cryptography (or rather, to 
cryptography on quantum computers).


The argument here is that Shor's factoring algorithm is dependent on 
the Quantum Fourier Transform, which is sensitive to errors in 
quantum input states, and these errors are not capable of being 
suppressed by (current) quantum error correcting codes.


Most saliently, 1) these errors destroy the polynomial scaling of 
Shor's algorithm, and 2) for those not familiar with the QM terms, 
decoherence roughly means that the quantum state reduces to classical 
(i.e., non-quantum) behavior.


Follow-ups on this line of research will be interesting for the 
evaluation of any impact of quantum computers on cryptography, and 
even generally, since the decoherence behavior would tend to make 
quantum computers approximate improving classical computers.



From the Physics pre-print server arXiv, quantum physics section:
http://arxiv.org/abs/0804.3076

Operator Imprecision and Scaling of Shor's Algorithm
Authors: C. Ray Hill, George F. Viamontes
(Submitted on 18 Apr 2008)

Abstract: Shor's algorithm (SA) is a quantum algorithm for 
factoring integers. Since SA has polynomial complexity while the 
best classical factoring algorithms are sub-exponential, SA is cited 
as evidence that quantum computers are more powerful than classical 
computers. SA is critically dependent on the Quantum Fourier 
Transform (QFT) and it is known that the QFT is sensitive to errors 
in the quantum state input to it. In this paper, we show that the 
polynomial scaling of SA is destroyed by input errors to the QFT 
part of the algorithm. We also show that Quantum Error Correcting 
Codes (QECC) are not capable of suppressing errors due to operator 
imprecision and that propagation of operator precision errors is 
sufficient to severely degrade the effectiveness of SA. Additionally 
we show that operator imprecision in the error correction circuit 
for the Calderbank-Shor-Steane QECC is mathematically equivalent to 
decoherence on every physical qubit in a register. We conclude that, 
because of the effect of operator precision errors, it is likely 
that physically realizable quantum computers will be capable of 
factoring integers no more efficiently than classical computers.


Concluding Remarks:
Quantum error correction is capable of reliably suppressing 
single-qubit errors due to environmental disturbances or operator 
precision errors.  However, operator imprecision errors propagate 
and grow during the course of a quantum computation and have an 
important impact on the efficiency of the computation.  In 
particular, we have shown that operator precision errors break the 
polynomial scaling of Shor's algorithm and conclude that, in the 
presence of operator precision errors, Shor's algorithm is no more 
efficient than classical algorithms for factoring integers.  To 
demonstrate how operator precision errors propagate in practice, we 
proved that the error correction circuit for the CSS QECC is 
mathematically equivalent to applying decoherence on each physical 
qubit in a logical qubit register.  We then used simulation to show 
that this accumulated error on each qubit causes the probability of 
error in a CSS QECC register to increase linearly with the number of 
gates applied.


It is natural to ask whether these results have wider implications 
about the power of quantum computers relative to classical 
computers.  While the results presented in this paper do not answer 
this question definitively, it is important to note the singular 
stature of Shor's algorithm as the only quantum algorithm that 
appears to solve a classically intractable problem.  The fact that 
Shor's algorithm is not more efficient than classical algorithms 
removes the only strong evidence for the superior computational 
power of quantum computers relative to classical computers. 


--

 | || ||| || ||| || ||| || ||| || ||| || ||| || |||

Charles McElwain
33 Vernon Street
Somerville, MA 02145
617-628-5542 (home)
617-501-1591 (cell)
[EMAIL PROTECTED]

 | || ||| || ||| || ||| || ||| || ||| || ||| || |||



Re: defending against evil in all layers of hardware and software

2008-04-28 Thread Perry E. Metzger

John Denker [EMAIL PROTECTED] writes:
 This is an important discussion.

 The threats are real, and we need to defend against them.

I'm not sure how to feasibly defend against such things. It would seem
to require complete control over the entire design and supply chain,
which involves so many thousands of people who could be bribed that I
have serious doubts that it can be done perfectly.

 This should not be an occasion for idly wringing our hands, nor 
 sticking our head in the sand, nor looking under the lamp-post 
 where the looking is easy.  We need to look at all of this stuff.  
 And we can.  We are not defenseless.

I'll believe that when I see feasible defenses. So far as I can tell,
if you can't trust the hardware supplier, you're meat. I don't think
it is possible even in principle to validate the hardware after the
fact. Even if you could apply formal methods successfully, it isn't
even obvious how you would specify the property of the system that
you're trying to prove.  "Never does anything bad" is kind of nebulous
if you're doing proofs.

 As in all security, we need not aim for absolute security.  An 
 often-useful approach is to do things that raise the cost to 
 the attacker out of proportion to the cost to the defender.

Well, this sort of thing is already not that interesting to an
ordinary spammer or phisher -- they have no trouble making loads of
money without engaging in such stuff.

If you're talking about what a national government might pay to get
such a back door in hardware, though, I think that it is probably
worth billions to such an entity. After all, a decent bomber these
days costs a billion dollars, and clearly this is a lot more potent.

Given that, I don't see what would work in practice. If a major power
wanted to turn a couple of engineers at the right place in the design
or supply chain, the amount of money needed is far below the amount in
play.

 For software, for firmware, and to some extent even for silicon
 masks, SCM (source code management) systems, if used wisely, can
 help a lot.

If you have a rotten apple engineer, he will be able to hide what he's
trying to do and make it look completely legit. If he's really good,
it may not be possible to catch what he's done EVEN IN PRINCIPLE. All
an SCM can do is tell you who put the bad stuff in much after the fact
if you ever catch it at all. That's not exactly defense. It is at
best post mortem.

 Of course we should insist on an open-source boot ROM code:
   http://www.coreboot.org/Welcome_to_coreboot

Won't help. A bad apple can probably manage a sufficiently subtle flaw
that it won't be noticed by widespread code inspection. See Jerry's
earlier posting on the subject.

 Another line of defense involves closing the loop.  For example,
 one could in principle find Ken's trojan by disassembling the
 compiler and looking for code that doesn't seem to belong.

Only if the bad guy doesn't anticipate that you might do that.

I'm pretty sure we can defend against this sort of thing a lot of the
time (by no means all) if it is done by quite ordinary criminals. If
it is done by really good people, I have very serious doubts.


-- 
Perry E. Metzger[EMAIL PROTECTED]



Re: Doubts about efficiency of Shor's factoring algorithm in quantum computers

2008-04-28 Thread Perry E. Metzger

Charles McElwain [EMAIL PROTECTED] writes:
 Follow-ups on this line of research will be interesting for the
 evaluation of any impact of quantum computers on cryptography, and
 even generally, since the decoherence behavior would tend to make
 quantum computers approximate improving classical computers.

Very interesting indeed. I'd be curious about the opinions of people
who know the field well. My QM and quantum computing knowledge aren't
quite up to the task of analyzing the paper.

 From the Physics pre-print server arXiv, quantum physics section:
 http://arxiv.org/abs/0804.3076

Perry



SSL and Malicious Hardware/Software

2008-04-28 Thread Ryan Phillips
Matt's blog post [1] gets to the heart of the matter of what we can trust.

I may have missed the discussion, but I ran across Netronome's 'SSL
Inspector' appliance [2] today and with the recent discussion on this list
regarding malicious hardware, I find this appliance appalling. 

Basically a corporation can inject an SSL trusted CA key into the keystore
within their corporate operating system image and have this device generate
a new server certificate for every SSL-enabled website, signed by the
trusted CA, and handed to the client.  The client does a validation check
and trusts the generated certificate, since the CA is trusted.  A very nice
man-in-the-middle, and it would trick most casual computer users.

I'm guessing these bogus certificates can be forged to look like the real
thing, but differ only in the fingerprint and the root CA that was used to
sign them.
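
One way a user (or auditor) could notice this: the forged certificate
chains to the corporate CA, so its fingerprint will differ from one
recorded on an uninspected network.  A sketch in Python -- the host
and the pinned value below are placeholders:

    import hashlib, ssl

    def cert_fingerprint(host, port=443):
        pem = ssl.get_server_certificate((host, port))
        return hashlib.sha256(ssl.PEM_cert_to_DER_cert(pem)).hexdigest()

    KNOWN_PIN = "9f86d081..."   # recorded out-of-band, on a trusted network
    seen = cert_fingerprint("mail.example.com")
    if seen != KNOWN_PIN:
        print("certificate changed -- possible SSL interception:", seen)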

What are people's opinions on corporations using this tactic?  I can't
think of a great way of alerting the user, but I would expect a pretty
reasonable level of privacy while using an SSL connection at work.  

Regards,
Ryan

[1] http://www.crypto.com/blog/hardware_security/
[2] http://www.netronome.com/web/guest/products/ssl_appliance



Just update the microcode (was: Re: defending against evil in all layers of hardware and software)

2008-04-28 Thread John Ioannidis
Intel and AMD processors can have new microcode loaded into them, and this 
is usually done by the BIOS.  Presumably there is some asymmetric crypto 
involved, with the processor doing the signature validation.


A major power that makes a good fraction of the world's laptops and 
desktops (and hence controls the circuitry and the BIOS, even if they do 
not control the chip manufacturing process) would be in a good place to 
introduce problems that way, no?


/ji
