On Sep 6, 2013, at 6:23 AM, Jerry Leichter <leich...@lrw.com> wrote:

> Is such an attack against AES *plausible*?  I'd have to say no.  But if you 
> were on the stand as an expert witness and were asked under cross-examination 
> "Is this *possible*?", I contend the only answer you could give is "I suppose 
> so" (with tone and body language trying to signal to the jury that you're 
> being forced to give an answer that's true but you don't in your gut believe 
> it).

I'd be happy to give a different answer, like -- almost certainly not.

> Could an encryption algorithm be explicitly designed to have properties like 
> this?  I don't know of any, but it seems possible.  I've long suspected that 
> NSA might want this kind of property for some of its own systems:  In some 
> cases, it completely controls key generation and distribution, so can make 
> sure the system as fielded only uses "good" keys.  If the algorithm leaks 
> without the key generation tricks leaking, it's not just useless to whoever 
> grabs onto it - it's positively hazardous.  The gun that always blows up when 
> the bad guy tries to shoot it....

We know as a mathematical theorem that a block cipher with a back door *is* a 
public-key system. It is a very, very, very valuable thing, and suggests other 
mathematical secrets about hitherto unknown ways to make fast, secure public 
key systems. 
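A toy sketch of that equivalence, with the back door modeled crudely as a secret constant XORed into a leaked session key. Everything here (MASTER, the keystream, the cipher itself) is invented for illustration only; a real backdoored cipher would hide the trapdoor in its structure, not in an obvious header:

```python
import hashlib
import os

MASTER = os.urandom(16)  # the designer's hidden back door

def keystream(key, n):
    # Simple hash-counter keystream, purely for the toy cipher.
    out = b""
    ctr = 0
    while len(out) < n:
        out += hashlib.sha256(key + ctr.to_bytes(4, "big")).digest()
        ctr += 1
    return out[:n]

def encrypt(key, msg):
    # The "cipher" leaks the 16-byte session key, masked by MASTER.
    # To anyone without MASTER, the header looks like random bytes.
    leaked = bytes(a ^ b for a, b in zip(key, MASTER))
    body = bytes(a ^ b for a, b in zip(msg, keystream(key, len(msg))))
    return leaked + body

def backdoor_decrypt(ct):
    # The back-door holder recovers the session key without ever
    # being given it -- which is exactly public-key decryption.
    key = bytes(a ^ b for a, b in zip(ct[:16], MASTER))
    body = ct[16:]
    return bytes(a ^ b for a, b in zip(body, keystream(key, len(body))))

# Used as a public-key system: the "public key" is the published
# cipher; the "private key" is MASTER. Any sender picks a fresh
# random key and encrypts; only the back-door holder can read it.
ct = encrypt(os.urandom(16), b"attack at dawn")
assert backdoor_decrypt(ct) == b"attack at dawn"
```

The point of the sketch is the interface, not the (deliberately broken) internals: publish the cipher, keep the back door, and senders who choose random keys are doing public-key encryption to you.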

To me, it's like getting a cheap supply of gold and then deciding you'll make 
bullets out of it instead of lead. To riff on that analogy, it feels like 
you're suggesting that they would shoot themselves in the foot because they 
know that the bullet fragments will hurt their opponent.

That's why I say almost certainly not. It suggests irrationality beyond my 
personal ken. It's something I classify colloquially as "too stupid to live."

My assumptions about the NSA are that they're smart, clever, and practical. 
Conjectures about their behavior ring false to the degree that they deviate 
from those axes.

My conjectures begin with assuming they're at least as smart as me, and I start 
with "what would I do if I were them?" I think they're smart enough not to 
attack the strong points of the system, but the weak points. I think they're 
smart enough to prefer operating in stealth.

Yeah, yeah, sure, if with those resources I stumbled into a fundamental 
mathematical advantage, I'd use it. But I would use it to maximize my gain, not 
to be gratuitously sneaky.

The math we know about block ciphers suggests (not proves, suggests) that a 
back door in a cipher is impractical, because it would imply the holy grail of 
public key systems -- fast, secure, public key crypto. It suggests secure 
trapdoor functions that can be made out of very simple components.

If I found one, it would be great, but I'd devote my resources to places where 
technology is on my side. Those include network security and software 
security, along with traffic analysis.

If I wanted to devote research resources, I'd be looking closely at 
language-theoretic security. I'd be paying close attention to the fantastic 
things that have come out of there.

The stuff that Bangert, Bratus, Shapiro, and Smith did on turning an MMU into a 
Turing machine is where I'd devote research, as well as their related work on 
"weird machines."

I apologize for repeating myself, but I'd fight the next war, not the last one.

