On Sep 28, 2013, at 3:06 PM, ianG wrote:
>> Problem with the NSA is that it's Jekyll and Hyde. There is the good side
>> trying to improve security and the dark side trying to break it. Which
>> side did the push for EC come from?
> What's in Suite A? Will probably illuminate that question...
The actual algorithms are classified, and about all that has leaked about
them, as far as I can determine in a quick search, is the names of some of
them and the general properties of a subset of those. For example, according
to Wikipedia, BATON is a block cipher with a key length of 320 bits, 160 of
which are checksum bits - I'd guess that this is an overt way for NSA to
control who can use stolen equipment, since the equipment will presumably
refuse to operate at all with an invalid key. Much of this kind of
information seems to come from public descriptions of equipment sold to the
government that implements these algorithms, though some of it (in
particular, the name BATON and its key and block sizes) has made it into
published standards via algorithm specifiers. Cryptome has a few leaked
documents as well - one, for example, shows BATON mentioned in Congressional
testimony about Clipper.
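To make the "160 checksum bits" idea concrete: here is a minimal sketch,
purely hypothetical, of how a device could treat half of a 320-bit key as a
checksum of the other half. BATON's real checksum algorithm is classified;
the keyed-SHA-1 construction and the embedded device secret below are my own
assumptions for illustration.

```python
# Hypothetical illustration only: BATON's real checksum is classified.
# Assume a 320-bit key whose low 160 bits must equal a keyed digest
# (under a secret embedded in the sealed module) of the high 160 bits.
import hmac
import hashlib

DEVICE_SECRET = b"embedded-in-tamper-proof-module"  # assumption

def make_key(material_160: bytes) -> bytes:
    """Append the 160-bit checksum to 160 bits of key material."""
    assert len(material_160) == 20  # 160 bits
    checksum = hmac.new(DEVICE_SECRET, material_160, hashlib.sha1).digest()
    return material_160 + checksum  # 320-bit key

def key_is_valid(key_320: bytes) -> bool:
    """What the sealed module would verify before agreeing to operate."""
    material, checksum = key_320[:20], key_320[20:]
    expected = hmac.new(DEVICE_SECRET, material, hashlib.sha1).digest()
    return hmac.compare_digest(checksum, expected)
```

The point of keying the checksum is that only whoever holds the device
secret can mint keys the equipment will accept - which is exactly the
"control who can use stolen equipment" property.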
Cryptographic challenge: If you have a sealed, tamper-proof box that
implements, say, BATON, you can easily have it refuse to work if the key
presented doesn't checksum correctly. In fact, you'd likely have it destroy
itself if presented with too many invalid keys. NSA has always been big on
using such sealed modules for its own algorithms. (The FIPS specs were
clearly drafted by people who think in these terms. If you're looking at
them while trying to get software certified, many of the provisions look
very peculiar. OK, no one expects your software to be potted in epoxy
("opaque in the ultraviolet" - or was it infrared?); but they do expect
various kinds of isolation that only affect the blocks in a diagram of your
software's implementation. They have no meaningful effect on security,
because software, unlike hardware, can't enforce any boundaries between
those blocks.)
Anyway, this approach obviously depends on the hardware's ability to resist
attacks. Can one design an algorithm that is inherently secure against such
attacks? For example, can one design an algorithm that's strong when used
with valid keys but that either fails outright (e.g., produces out-of-range
indexes into something like its S-boxes) or becomes easily invertible
(e.g., has a key schedule that produces all-zero round keys after a small
number of rounds) when used with invalid keys? You'd need something akin to
asymmetric cryptography to prevent anyone from reverse-engineering the
checksum algorithm from the encryption algorithm, but I know of no
fundamental reason why that couldn't be done.
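As a toy illustration of the degenerate-key-schedule idea (and only of that
idea - this hides nothing, so it doesn't solve the reverse-engineering
problem mentioned above): take round key i to be key^(i+1) mod 2^16. Treat
odd keys as "valid": odd numbers are units mod 2^16, so their powers never
vanish. An "invalid" even key collapses - e.g., a key divisible by 16
satisfies key^4 = 0 (mod 2^16), so every round key after the first few is
zero and the later rounds contribute nothing.

```python
def round_keys(key16: int, rounds: int = 24) -> list[int]:
    """Toy key schedule over 16-bit words: rk[i] = key^(i+1) mod 2^16.

    Odd ("valid") keys are invertible mod 2^16, so no round key is
    ever zero. An even ("invalid") key is nilpotent mod 2^16: for a
    key divisible by 16, key^4 = 0 (mod 2^16), so the round keys are
    all zero from index 3 onward and the cipher degenerates to its
    first few rounds."""
    rks, rk = [], 1
    for _ in range(rounds):
        rk = (rk * key16) & 0xFFFF
        rks.append(rk)
    return rks
```

Of course, here the valid/invalid criterion (parity) is obvious from the
schedule itself; the open question in the challenge is making that
criterion as hard to recover as a private key.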
-- Jerry
_______________________________________________
The cryptography mailing list
[email protected]
http://www.metzdowd.com/mailman/listinfo/cryptography