On Sep 7, 2013, at 11:45 PM, John Kelsey wrote:

> Let's suppose I design a block cipher such that, with a randomly generated 
> key and 10,000 known plaintexts, I can recover that key.... At this point, 
> what I have is a trapdoor one-way function.  You generate a random key K and 
> then compute E(K,i) for i = 1 to 10000.  The output of the one-way function 
> is the ciphertext.  The input is K.  If nobody can break the cipher, then 
> this is a one-way function.  If only I, who designed it, can break it, then 
> it's a trapdoor one-way function.... At this point, I have a perfectly fine 
> public key encryption system.  To send me a message, choose a random K, use 
> it to encrypt 1 through 10000, and then send me the actual message encrypted 
> after that in K.  If nobody but me can break the system, then this cipher 
> works as my public key.
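
Just so we're concrete about what's being claimed, here's a minimal sketch of 
your construction in Python.  AES (via pycryptodome) stands in for the 
hypothetical "spiked" cipher, and secret_key_recovery_attack is a placeholder 
of my own marking where the designer's private attack would sit - none of the 
names or parameters are anyone's actual design:

    import os
    from Crypto.Cipher import AES   # pycryptodome; stand-in for the "spiked" cipher

    BLOCK = 16
    N_PAIRS = 10000

    def E(key, block):
        # The published block cipher.  Here it's plain AES-128; in the scenario
        # it would be a cipher with a hidden key-recovery attack.
        return AES.new(key, AES.MODE_ECB).encrypt(block)

    def encrypt_to_designer(message):
        # Sender's side: needs only public knowledge.
        K = os.urandom(16)                         # random session key
        pairs = [E(K, i.to_bytes(BLOCK, 'big'))    # E(K, i) for i = 1..10000
                 for i in range(1, N_PAIRS + 1)]
        nonce = os.urandom(8)
        body = AES.new(K, AES.MODE_CTR, nonce=nonce).encrypt(message)
        return pairs, nonce, body                  # the "public-key ciphertext"

    def secret_key_recovery_attack(pairs):
        # The trapdoor: recover K from the 10,000 known plaintext/ciphertext
        # pairs.  No such attack on AES is known; this placeholder only marks
        # where the designer's secret knowledge would go.
        raise NotImplementedError

    def decrypt_as_designer(pairs, nonce, body):
        # Receiver's side: works only for whoever holds the secret attack.
        K = secret_key_recovery_attack(pairs)
        return AES.new(K, AES.MODE_CTR, nonce=nonce).decrypt(body)
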
OK, let's look at this another way.  The broader argument being made here 
breaks down into three propositions:

1.  If you have a way to "spike" a block cipher based on embedding a secret in 
it, you have a way to create something with the formal properties of a public 
key cryptosystem - i.e., there is a function E(P) which anyone can compute on 
any plaintext P, but given E(P), only you can invert it to recover P.

2.  Something with the formal properties of a public key cryptosystem can be 
used as a *practical* public key cryptosystem.

3.  A practical public-key cryptosystem is much more valuable than a way to 
embed a secret in a block cipher, so if anyone came up with the latter, they 
would certainly use it to create the former, as it's been "the holy grail" of 
cryptography for many years to come up with a public key system that didn't 
depend on complex mathematics with uncertain properties.

If we assume these three propositions, and look around us and observe the lack 
of the appropriate kinds of public key systems, we can certainly conclude that 
no one knows how to embed a secret in a block cipher.

Proposition 1, which is all you specifically address, is certainly true.  I 
claim that Propositions 2 and 3 are clearly false.

In fact, Proposition 3 isn't even vaguely mathematical - it's some kind of 
statement about the values that cryptographers assign to different kinds of 
primitives and to publication.  It's quite true that if anyone in the academic 
world were to come up with a way to create a practical public key cryptosystem 
without a dependence on factoring or DLP, they would publish to much acclaim.  
(Of course, there *are* a couple of such systems known - they were published 
years ago - but no one uses them for various reasons.  So "acclaim" ... well, 
maybe.)  Then again, an academic cryptographer who discovered a way to hide a 
secret in a block cipher would certainly publish - it would be really 
significant work.  So we never needed this whole chain of propositions to begin 
with:  It's self-evidently true that no one in the public community knows how 
to embed a secret in a block cipher.

But ... since we're talking *values*, what are NSA's values?  Would *they* have 
any reason to publish if they found a way to embed a secret in a block cipher? 
Hell, no!  Why would they want to give away such valuable knowledge?  Would 
they produce a private-key system based on their breakthrough?  Maybe, for 
internal use.  How would we ever know?

But let's talk mathematics, not psychology and politics.  You've given a 
description of a kind of back door that *would* produce a practical public key 
system.  But I've elsewhere pointed out that there are all kinds of back doors.
Suppose that my back door reduces the effective key size of AES to 40 bits.  
Even 20+ years ago, NSA was willing to export 40-bit crypto; presumably they 
were willing to do the brute-force computation to break it.  Today, it would be 
a piece of cake.  But would a public-key system that requires around 2^40 
operations to encrypt be *practical*?  Even today, I doubt it.  And if you're 
willing to do 2^40 operations, are you willing to do 2^56?  With specialized 
hardware, that, too, has been easy for years.  NSA can certainly have that 
specialized hardware for code breaking - will you buy it for encryption?
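
To put rough numbers on that (the throughput figure below is just an assumption 
about a single commodity core, picked only for scale):

    # Scale of a brute-force "trapdoor", assuming 10^8 block encryptions per
    # second on one core - an assumed figure, chosen purely for illustration.
    OPS_PER_SEC = 10**8

    for bits in (40, 56):
        hours = 2**bits / OPS_PER_SEC / 3600
        print("2^%d operations: about %.0f core-hours" % (bits, hours))
    # Prints roughly 3 core-hours for 2^40 and about 200,000 core-hours
    # (over twenty core-years) for 2^56.

Three core-hours per operation is tolerable for an agency with racks of 
dedicated hardware; it's not something you'd accept on your own machine for 
every message.
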

> The assumption that matters here is that you know enough cryptanalysis that 
> it would be hard to hide a practical attack from you.  If you don't know 
> about differential cryptanalysis, I can do the master key cryptosystem, but 
> only until you learn about it, at which point you will break my cipher.
In fact, this is an example I was going to give:  In a world in which 
differential cryptanalysis isn't known, it *is* a secret that's a back door.  
Before DC was published, people seriously proposed strengthening DES by using a 
768-bit key - 16 independently chosen 48-bit round keys - i.e., just toss the 
round key computation mechanism and provide all the keying for all the rounds 
directly.  If that had been widely used, NSA would have been able to break it 
using DC.

Of course we know about DC.  But the only reason RSA is safe is that we don't 
know how to factor quickly!  (Actually, even that's not quite true - after all 
these years, as far as I know, we *still* haven't managed to show that RSA is 
as hard as factoring - only the obvious fact that it's no harder.  It would be 
an incredible and unexpected result to separate the problems, but it *could* 
happen.)  It happens that in the case of RSA, we can point to *a particular, 
easy-to-state problem* that, if solved, would break the system.  Things are 
much more nebulous for block ciphers, but that shouldn't be surprising:  They 
don't have simple, clean mathematical structures.  (In fact, when Rijndael was 
first proposed, there was some concern that it was "too clean" - that its 
relatively simple structure would provide traction for mathematical attacks.  
It doesn't seem to have worked out that way.)
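
(The "no harder" direction, for completeness, is a two-line computation.  The 
primes below are toy-sized and mine, chosen purely for illustration:)

    # Toy illustration that RSA is no harder than factoring: given the factors
    # of n, the private exponent follows immediately.
    p, q, e = 101, 113, 17                 # toy primes, nothing like real parameters
    n = p * q
    d = pow(e, -1, (p - 1) * (q - 1))      # private exponent, straight from the
                                           # factors (Python 3.8+ modular inverse)
    m = 42
    c = pow(m, e, n)                       # anyone can encrypt with the public (n, e)
    assert pow(c, d, n) == m               # whoever can factor n can decrypt
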

There are, in fact, even closer analogues between potential RSA weaknesses and 
DC weaknesses.  What DC tells us is that certain S boxes lead to weak ciphers, 
even if the general structure is otherwise sound.  Well ... over the years, 
we've learned that certain choices of primes lead to weak RSA, so we've 
restricted the choices we make to the "good" primes.  This is a much more 
pronounced effect with discrete log-based algorithms - and yet more so with 
elliptic curve algorithms.  Anyone who knows about such weaknesses before they 
are known to the public, and is in a position to influence how primes or curves 
are chosen, can embed a back door - a back door that will be closed as soon as 
the potential weakness becomes known.  How is this different in the public-key 
and block cipher cases?
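
To make "certain choices of primes lead to weak RSA" concrete, here's the 
classic public example - Pollard's p-1 - as a short Python sketch.  The code 
and the toy-sized parameters are just my illustration; the point is only that a 
structurally "bad" prime falls out immediately:

    from math import gcd

    def pollard_p_minus_1(n, bound=1000):
        # Pollard's p-1: build a = 2^(k!) mod n incrementally.  If some prime p
        # dividing n has p-1 made only of small factors, then p-1 divides k!
        # early on, a = 1 (mod p), and gcd(a-1, n) exposes p.
        a = 2
        for k in range(2, bound):
            a = pow(a, k, n)
            g = gcd(a - 1, n)
            if 1 < g < n:
                return g
        return None

    p = 1021    # p - 1 = 2^2 * 3 * 5 * 17: all small factors - a "bad" prime
    q = 1019    # q - 1 = 2 * 509: one large prime factor - a "good" one
    print(pollard_p_minus_1(p * q))    # prints 1021 after a handful of iterations
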

In the case of elliptic curve algorithms, we *know* that in some cases it's 
possible to embed a secret, but we don't know of any way to tell if a secret 
*actually was embedded*.  DC happens to have the property that once you know 
about it, you can check if your S-boxes are bad - but where's the proof that a 
different attack will necessarily have this property?  If such an attack were 
to emerge against AES, *mathematically*, we'd have to say, well, maybe someone 
put a back door into AES - we really don't know one way or another, just as we 
don't know, one way or the other, about the NSA-suggested elliptic curves and 
points.  In either case, we should probably come up with something new.

                                                        -- Jerry

