On Sep 30, 2013, at 4:16 AM, ianG <i...@iang.org> wrote:
> I'm not really understanding the need for checksums on keys.  I can sort of 
> see the battlefield requirement that comms equipment that is stolen can't 
> then be utilized in either a direct sense (listening in) or re-sold to some 
> other theater.
I'm *guessing* that this is what checksums are for, but I don't actually 
*know*.  (People used to wonder why NSA asked that DES keys be checksummed - 
the original IBM Lucifer algorithm used a full 64-bit key, while DES required 
parity bits on each byte.  On the one hand, this decreased the key size from 64 
to 56 bits; on the other, it turns out that under differential cryptanalysis, 
DES only provides about 56 bits of security anyway.  NSA, based on what we saw 
in the Clipper chip, seems to like running crypto algorithms "tight":  Just as 
much effective security as the key size implies, exactly enough rounds to 
attain it, etc.  So *maybe* that was why they asked for 56-bit keys.  Or maybe 
they wanted to make brute force attacks easier for themselves.)
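(For concreteness, here's a small Python sketch of the DES odd-parity 
convention - the low-order bit of each key byte is forced so the byte has an 
odd number of one bits, which is why only 56 of the 64 bits carry key 
material.  The function name is mine, purely for illustration.)

```python
def set_odd_parity(key8: bytes) -> bytes:
    """Force each byte of a 64-bit DES key to odd parity.

    The low-order bit of each byte is the parity bit, so only
    7 bits per byte (56 of 64) are actual key material.
    """
    out = bytearray()
    for b in key8:
        high7 = b & 0xFE                  # the 7 key-material bits
        ones = bin(high7).count("1")      # how many of them are set
        # set the parity bit only when needed to make the total odd
        out.append(high7 | (0 if ones % 2 else 1))
    return bytes(out)

key = set_odd_parity(bytes(range(8)))
# every byte now has an odd number of one bits
assert all(bin(b).count("1") % 2 == 1 for b in key)
```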

> But it still doesn't quite work.  It seems antithetical to NSA's obsession 
> with security at Suite A levels, if they are worried about the gear being 
> snatched, they shouldn't have secret algorithms in them at all.
This reminds me of the signature line someone used for years:  A boat in a 
harbor is safe, but that's not what boats are for.  In some cases you need to 
communicate securely with someone who's "in harm's way", so any security device 
you give him is also "in harm's way".  This is hardly a new problem.  Back in 
WW I, code books on ships had lead covers and anyone who had access to them had 
an obligation to see they were tossed overboard if the ship was about to fall 
into enemy hands.  Attackers tried very hard to get to the code book before it 
could be tossed.

Embassies need to be able to communicate at very high levels of security.  They 
are normally considered quite secure, but quiet attacks against them do occur.  
(There are some interesting stories of such things in Peter Wright's 
Spycatcher, which tells the story of his career in MI5.  If you haven't read it 
- get a copy right now.)  And of course people always look at the seizure of 
the US embassy in Iran.  I don't know if any crypto equipment was compromised, 
but it has been reported that the Iranians were able, by dint of a huge amount 
of manual labor, to piece back together shredded documents.  (This led to an 
upgrade of shredders not just by the State Department but in the market at 
large, which came to demand cross-cut shredders, which cut the paper into 
longitudinal strips, but then cut across the strips to produce pieces no more 
than an inch or so long.  Those probably could be re-assembled using 
computerized techniques - originally developed to re-assemble old parchments 
like the Dead Sea Scrolls.)

Today, there are multiple layers of protection.  The equipment is designed to 
zero out any embedded keys if tampered with.  (This is common even in the 
commercial market for things like ATMs.)  A variety of techniques are used to 
make it hard to reverse-engineer the equipment.  (In fact, even FIPS 
certification of hardware requires some of these measures.)  At the extreme, 
operators of equipment are supposed to destroy it to prevent its capture.  
(There was a case a number of years back of a military plane that was forced by 
mechanical trouble to land in China.  A big question was how much of the 
equipment had been destroyed.  There are similar cases even today with ships, 
in which people on board take axes to the equipment.)

> Using checksums also doesn't make sense, as once the checksum algorithm is 
> recovered, the protection is dead.
The hardware is considered hard to break into, and one hopes it's usually 
destroyed.  The military, and apparently the NSA, believes in defense in depth. 
 If someone manages to get the checksum algorithm out, they probably have the 
crypto algorithm, too.

> I would have thought a HMAC approach would be better, but this then brings in 
> the need for a centralised key distro approach.
Why HMAC?  If you mean a keyed MAC ... it's not better.   But a true signature 
would mean that even completely breaking a captured device doesn't help you 
generate valid keys.  (Of course, you can modify the device - or a cloned copy 
- to skip the key signature check - hence my question as to whether one could 
create a crypto system that *inherently* had the properties that signed keys 
naively provide.)
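(To illustrate why a keyed MAC isn't better, here's a hypothetical sketch in 
Python, stdlib hmac only - all the names and the setup are mine.  The point: 
the same secret that verifies a key's tag also *creates* tags, so dumping it 
from one captured device lets an attacker mint valid keys for every device.  A 
true signature would leave only the public verifying key in the device.)

```python
import hmac, hashlib, os

# Shared by headquarters AND baked into every fielded device -
# this symmetry is exactly the weakness.
AUTH_SECRET = os.urandom(32)

def tag_key(session_key: bytes) -> bytes:
    """HQ tags a session key before distributing it."""
    return hmac.new(AUTH_SECRET, session_key, hashlib.sha256).digest()

def device_accepts(session_key: bytes, tag: bytes) -> bool:
    """A device checks the tag - but it needs AUTH_SECRET to do so."""
    return hmac.compare_digest(tag_key(session_key), tag)

good = os.urandom(8)
assert device_accepts(good, tag_key(good))        # legitimate key loads

# An attacker who extracts AUTH_SECRET from one device can do exactly
# what HQ does - there is no MAC analogue of a public verifying key.
forged = os.urandom(8)
assert device_accepts(forged, tag_key(forged))    # forged key also loads
```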

>  Ok, so that is typically how battlefield codes work -- one set for everyone 
> -- but I would have thought they'd have moved on from the delivery SPOF by 
> now.
In a hierarchical organization, centralized means of control are considered 
important.  There was an analysis of the (bad) cryptography in "secure" radios 
for police and fire departments, and it mainly relied on distribution of keys 
from a central source.

> ...It also seems a little overdone to do that in the algorithm.  Why not 
> implement a kill switch with a separate parallel system?  If one is designing 
> the hardware, then one has control over these things.
Defense in depth?  Seriously, I don't *know* what NSA does - I can only observe 
what they let out.

> I guess then I really don't understand the threat they are trying to address 
> here.
I proposed the challenge of a "secure against use of unauthorized keys" crypto 
system as a theoretical challenge.  I don't know how to do it; I don't know if 
it can be done.  Perhaps if it can be done, people will find unexpected uses 
for it.

                                                        -- Jerry

The cryptography mailing list
