Re: Who cares about side-channel attacks?

2008-10-25 Thread Peter Gutmann
Thierry Moreau [EMAIL PROTECTED] writes:

I find the question should be refined.

It could be if there were a large enough respondent base to draw samples from :-).
This is one of those surveys that can never be done, because no vendor will
publicly talk to you about security measures in their embedded systems.

In fact none of the people/organisations I queried about this fitted into any
of the proposed categories; it was all embedded devices, typically SCADA
systems, home automation, consumer electronics, that sort of thing, so it was
really a single category, "embedded systems".  Given the string of attacks on
crypto in embedded devices (Xbox, iPhone, i-Opener, Wii, some
not-yet-published ones on HDCP devices :-), etc.) this is by far the most
at-risk category: there's a huge incentive to attack these devices, the
results affect tens or hundreds of millions of them, and the attacks are
immediately and widely exploited (modchips, device unlocking, etc., an
important difference between this and academic proof-of-concept attacks), so
this is the one where I'd expect the vendors to care most.

Also, for organizations mandated to comply with IT security 
certification/guidelines/best-practice, a risk analysis is performed to 
keep the auditor at bay, in which SCA protection has very little chance 
of even merely being mentioned. How can the SCA protection mechanism fit 
the risk analysis discipline? I.e., is it possible to even define SCA 
protection in a way that might trigger interest from security 
consultants or their clients?

Actually that's a special case, or more generally having certification/
auditing requirements (which a private-email responder also mentioned) is a
special case, in that the risk analysis is now "if I don't do this I don't get
sign-off" rather than "it makes good security sense to do this, so we'll do
it".  In the immortal words of the Bastard Operator from Hell, when you have
the audit/certification gun pointed at someone's head you can pretty much
[get them to] run naked across campus with a power-cord rammed up [their]
backside, and they'd do it not because they thought it was a terribly good
idea but because they had a gun pointed at their head.

An associated problem with this is that if vendors are motivated solely by
checkbox requirements then they'll often ship the product in a non-approved
mode (cough, FIPS 140, cough) to reduce manufacturing or support costs,
increase performance, increase ease of use, or whatever.  It's a nasty
catch-22: hold a gun to someone's head and they'll only do what you tell them
for as long as the gun is applied.

Getting back to the SCADA/home automation/consumer electronics embedded 
market, the only certifications that apply are the likes of FCC Class B, RoHS,
CE, and UL.  This is why I was interested in finding cases (or 
counterexamples) of informed-consent use of SCA countermeasures, because in 
the general embedded-systems case vendor cost/benefit analysis is the only 
deciding factor on whether it gets used or not, and vendors seem to be 
deciding (from my own experience and some private-email replies) that it's not 
worth it.

Peter.

-
The Cryptography Mailing List
Unsubscribe by sending unsubscribe cryptography to [EMAIL PROTECTED]


Re: combining entropy

2008-10-25 Thread John Denker
On 10/24/2008 03:40 PM, Jack Lloyd wrote:

 Perhaps our seeming disagreement is due to a differing interpretation
 of 'trusted'. I took it to mean that at least one pool had a
 min-entropy above some security bound. You appear to have taken it to
 mean that it will be uniform random?

Thanks, that question advances the discussion.

The answer, however, is no, I did not assume 100% entropy
density.  Here is the critical assumption that I did make:

We consider the scenario where we started with N randomness
generators, but N-1 of them have failed.  One of them is
still working, but we don't know which one.

To say the same thing in more detail:  Suppose we start
with N generators, each of which puts out a 160 bit word
containing 80 bits of _trusted_ entropy.  That's a 50%
entropy density.

Here _trusted_ means we have a provable lower bound on the
entropy.  I assume this is the same as the aforementioned
"min-entropy above some security bound".

We next consider the case where N-1 of the generators have 
failed, or can no longer be trusted, which is essentially the
same thing for present purposes.  Now we have N-1 generators 
putting out zero bits of trusted entropy, plus one generator 
putting out 80 bits of trusted entropy.  I emphasize that
these 80 bits of trusted entropy are necessarily uncorrelated
with anything happening on the other N-1 machines, for the
simple reason that they are uncorrelated with anything 
happening anywhere else in the universe ... otherwise they
would not qualify as trusted entropy.

XORing together all N of the 160 bit output words produces
a single 160 bit word containing 80 bits of trusted entropy.
Therefore, unless there is some requirement or objective
that I don't know about, the previously-stated conclusion
holds:

 XOR is a good-enough combining function,
 and nothing else would be any better.

XOR is provably correct because it is _reversible_ in the 
thermodynamic sense.  That means it cannot increase or 
decrease the entropy.
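A toy numerical sketch of this point (my own illustration, not from the post, with an 8-bit word standing in for the 160-bit words above): XORing in the fixed, adversary-known outputs of the failed generators is a bijection on the working generator's output, so an empirical min-entropy estimate is unchanged by the combining step.

```python
# Sketch: XOR with fixed words is a bijection, so it preserves min-entropy.
import math
import random
from collections import Counter

BITS = 8  # toy word size; the post uses 160-bit words

def min_entropy(samples):
    """Estimate min-entropy: -log2 of the most probable outcome's frequency."""
    counts = Counter(samples)
    p_max = max(counts.values()) / len(samples)
    return -math.log2(p_max)

rng = random.Random(1)
# The one working generator: 50% entropy density
# (lower half of the bits uniform, upper half fixed at zero).
working = [rng.randrange(2 ** (BITS // 2)) for _ in range(200_000)]

# The N-1 failed generators: outputs fixed and known to the adversary.
failed_outputs = [0xA5, 0x3C]

combined = [w ^ failed_outputs[0] ^ failed_outputs[1] for w in working]

# Both estimates are identical (about 4 bits here), since the XOR
# merely relabels the outcomes without merging or splitting any of them.
print(min_entropy(working), min_entropy(combined))
```

The same relabeling argument holds for any entropy density, which is the generalization noted below.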

=

Obviously this numerical example generalizes to any entropy
density from zero to 100% inclusive.

To summarize:  The key assumptions are that we have N-1
broken generators and one working generator.  We don't
know which one is working, but we know that it is working 
correctly.



For more about the theory and practice of high-entropy
randomness generators, see
  http://www.av8n.com/turbid/



Re: combining entropy

2008-10-25 Thread IanG
Jonathan Katz wrote:
 I think it depends on what you mean by N pools of entropy.


I can see that my description was a bit weak, yes.  Here's a better
view, incorporating the feedback:

   If I have N people, each with a single pool of entropy,
   and I pool each of their contributions together with XOR,
   is that as good as it gets?

My assumptions are:

 * I trust no single person and their source of entropy.

 * I trust at least one person + pool.

 * Entropy is, by definition, independent and private
   (but it is worth stating this explicitly, as any leak will kill us!)

 * Efficiency is not a concern; we just expand the pool size
   (each pool is size X, and the result is size X).

 * The people have ordinary skill.
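The pooling described above can be sketched as follows (my own illustration; the function name and pool contents are assumptions, not from the thread). If at least one contribution is uniform and independent of the others, the bytewise XOR of all the pools is itself uniform:

```python
# Sketch: combine N equal-size entropy pools with bytewise XOR.
import secrets

def combine_pools(pools):
    """Bytewise XOR of equal-length byte-string pools; result has size X."""
    size = len(pools[0])
    assert all(len(p) == size for p in pools), "pools must be the same size"
    out = bytearray(size)
    for pool in pools:
        for i, b in enumerate(pool):
            out[i] ^= b
    return bytes(out)

X = 32  # pool size in bytes
pools = [
    b"\x00" * X,             # a failed contributor: all zeros
    b"\x42" * X,             # a predictable contributor
    secrets.token_bytes(X),  # the one honest, independent contributor
]
combined = combine_pools(pools)
print(len(combined))  # result is size X, same as each input pool
```

Note that verifying this XOR step is within "ordinary skill", which is the point of the last assumption.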



now to respond to the questions:


1.  I am assuming that at least one pool is good entropy.  This is
partly an assumption of desperation or simplicity.

In practice, no individual (source or person) is trusted at an
isolated level.  But this leads to a sort of circular argument that
says, nobody is trusted.  We can solve this two ways:

I join the circle.  I trust myself, *but* I don't trust
my source of entropy.  So this is still hopeful.

We ensure that there are at least two cartels in the
circle that don't trust each other!  Then, add a dash
of game theory, and the two cartel pools should at
least be independent of each other, and therefore the
result should be good entropy.

I suspect others could more logically arrive at a better assumption,
but for now, the assumption of one trusted person/pool seems to
cover it.

2.  Having thought about Stephan's comment a bit more (because it
arrived first), and a bit more about John D's entropy comments
(because they were precise), it is clear that I need to stress the
privacy / independence criteria, even if they are strictly covered by
the definition of entropy.  Too many of the practical aspects depend
on ensuring independence of the pools to just lean blithely on the
definitions.  I had missed that dependency.

3.  The proposals on concatenation and cleanup are tempting.  In
Jon's words, they can solve obvious problems.  However, they introduce
the complexity of understanding the cleanup function, and the
potential for failures (Jack's tradeoffs).  This has made me realise
the last assumption, now added:

   The people have ordinary skill.

This means they are unable to determine whether a cryptographically
complex cleanup function is indeed cleaning or not.

Here, then, we reach an obvious limit: the people have to be able to
determine that the XOR is doing its job, and they need to be able to
do a bit of research to decide on their best guess at their private
entropy source.



Thanks to all.

iang




Re: Cube cryptanalysis?

2008-10-25 Thread James Muir
Paul Hoffman wrote:
 At 11:08 AM -0700 8/21/08, Greg Rose wrote:
 Adi mentioned that the slides and paper will go online around the
 deadline for Eurocrypt submission; it will all become much clearer
 than my wounded explanations then.

 There now: http://eprint.iacr.org/2008/385


Given all the excitement over the Cube attack, readers may be interested
to have a closer look at an earlier paper by Vielhaber:

Breaking ONE.FIVIUM by AIDA (an Algebraic IV Differential Attack)
Michael Vielhaber
http://eprint.iacr.org/2007/413

Vielhaber claims that AIDA anticipates the Cube attack; see his post on
the IACR ePrint forum:

http://eprint.iacr.org/forum/read.php?8,59

-James
