Re: the effects of a spy

2005-11-18 Thread Jack Lloyd
On Thu, Nov 17, 2005 at 12:10:53PM -0500, John Kelsey wrote:

> c.  Maybe they just got it wrong.  SHA0 and SHA1 demonstrate that this
> is all too possible.  (It's quite plausible to me that they have very
> good tools for analyzing block ciphers, but that they aren't or
> weren't sure how to best apply them to hash functions.)  

SHA-* also look very much like the already existing and public MD4 and MD5.
I would be very willing to bet that the NSA's classified hash functions (I
assume it has some, though to be honest I have only ever seen information
about its block ciphers) look nothing like SHA. Perhaps their analysis tools
apply well to the designs they build internally but not to an MDx-style
hash, and they did not want to release a design based on some clever design
technique of theirs that the public didn't know about; when SHA was
released, Clipper and the export controls were still in full swing, so it
seems pretty plausible that the NSA wanted to limit how many goodies it
gave away.



-Jack




Re: the effects of a spy

2005-11-17 Thread John Kelsey


>From: [EMAIL PROTECTED]
>Sent: Nov 16, 2005 12:26 PM
>Subject: Re: the effects of a spy

...
>Remember Clipper?  It had an NSA-designed 80-bit encryption
>algorithm.  One interesting fact about it was that it appeared to be
>very aggressively designed.  Most published algorithms will, for
>example, use (say) 5 rounds beyond the point where differential
>cryptanalysis stops giving you an advantage.  Clipper, on the other
>hand, falls to differential cryptanalysis if you use even one less
>round than the specification calls for.

Nitpick: The system was Clipper; the algorithm was Skipjack.

>Why the NSA would design something so close to the edge has always
>been a bit of a mystery (well, to me anyway).  One interpretation is
>that NSA simply has a deeper understanding than outsiders of where
>the limits really are.  What to us looks like aggressive design, to
>them is reasonable and even conservative.

Three comments here:

a.  Maybe they really do have a good generic algorithm for limiting
differential probabilities.  There are algorithms like this in the public
world (they can be very computationally expensive, and they only give you
upper bounds on a subset of possible attacks), and you'd expect NSA to be
interested, since they design a lot of algorithms.  (A toy version of this
kind of bound is sketched after point (c) below.)  It's not so intuitive to
me that this would have applied to impossible differentials unless they
designed it to, though.  In that case, you're looking at differentials that
can't appear instead of differentials that appear too often.

b.  Maybe they don't care that much about attacks that require some
huge number of plaintexts.  The academic world has defined the game in
terms of total work being the critical parameter in the attack, and
we're seeing a push over time to move that to total attack cost.
(That is, it's not so interesting if you have a 2^{100} attack on a
128-bit block cipher, if the attack is impossible to parallelize.)  If
someone publishes an attack on Twofish tomorrow which requires 2^{96}
plaintexts to break it faster than brute-force, we'll all agree that's
an attack.  But there's no reason NSA has to think that way--maybe
they have some other parameter like 2^{n/2} texts for n-bit block
ciphers, after which they don't care about attacks because they're not
practical.  

c.  Maybe they just got it wrong.  SHA0 and SHA1 demonstrate that this
is all too possible.  (It's quite plausible to me that they have very
good tools for analyzing block ciphers, but that they aren't or
weren't sure how to best apply them to hash functions.)  
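
For concreteness, here's the sort of toy bound I had in mind in (a), in
Python.  The per-round probability and the block size are made-up numbers
for illustration, not Skipjack's real figures, and this is obviously not
anyone's actual design tool:

    # Bound the best r-round differential characteristic by the product of
    # the best per-round probabilities (the usual independent-rounds
    # heuristic), and report the round count at which that bound drops
    # below the codebook size 2^-n.
    def rounds_until_safe(p_max_per_round, block_bits):
        r = 1
        while p_max_per_round ** r >= 2.0 ** -block_bits:
            r += 1
        return r

    p_max = 2.0 ** -5   # hypothetical best one-round differential probability
    block = 64          # hypothetical 64-bit block
    r = rounds_until_safe(p_max, block)
    print("characteristic bound drops below 2^-%d after %d rounds" % (block, r))

A conservative public design would then add several rounds on top of r; the
complaint about Skipjack is that it apparently has no such margin.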

...
>Or maybe ... the reasoning Perry mentions above applies here.  Any
>time you field a system, there is a possibility that your opponents
>will get hold of it.  In the case of Clipper, where the algorithm was
>intended to be published, there's no "possibility" about it.  So why
>make it any stronger than you have to?

Reducing Skipjack to 31 rounds wouldn't make a practical trapdoor
appear!  You're still talking about 2^{34} chosen plaintexts!
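
(For scale, taking the 2^{34} and 2^{96} figures at face value together
with the public 64-bit and 128-bit block sizes, the raw data volumes are
just arithmetic:)

    # Chosen-plaintext volume implied by the figures quoted above.
    def volume_bytes(log2_texts, block_bits):
        return (2 ** log2_texts) * (block_bits // 8)

    skipjack = volume_bytes(34, 64)    # 2^34 texts against 31-round Skipjack
    twofish = volume_bytes(96, 128)    # hypothetical 2^96-text attack on a 128-bit cipher

    print("2^34 x 64-bit blocks  = %d GiB" % (skipjack // 2 ** 30))           # 128 GiB
    print("2^96 x 128-bit blocks = 2^%d bytes" % (twofish.bit_length() - 1))  # 2^100 bytes

Collecting 128 GiB of chosen plaintext is at least conceivable; collecting
2^{100} bytes is not, which is the sort of distinction point (b) is about.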

>   -- Jerry

--John Kelsey




Re: the effects of a spy

2005-11-17 Thread Travis H.
> actually justified for cryptosystems:  It turned out, on the key escrow side
> of the protocol design, NSA actually fell over the edge, and there was a
> simple attack (Matt Blaze's work, as I recall).

Details on the so-called LEAF blower here:
http://www.crypto.com/papers/eesproto.pdf
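
The core observation in that paper, as I read it: the LEAF's integrity is
protected only by a 16-bit checksum, so a rogue sender can keep feeding the
chip random LEAFs until one happens to verify, roughly 2^16 tries on
average.  A toy simulation of the idea (the checksum here is a keyed-hash
stand-in, since the real EES checksum algorithm is classified):

    import os, hashlib

    FAMILY_KEY = os.urandom(10)   # secret from the rogue sender's point of view

    # Stand-in verifier: accept a LEAF iff its last 16 bits match a keyed
    # checksum of the rest.
    def leaf_accepted(leaf):
        body, checksum = leaf[:-2], leaf[-2:]
        return hashlib.sha1(FAMILY_KEY + body).digest()[:2] == checksum

    # Try random LEAFs (carrying no genuine unit ID or session key) until
    # one passes the 16-bit check; expected ~65536 attempts.
    tries = 0
    while True:
        tries += 1
        if leaf_accepted(os.urandom(16)):
            break
    print("bogus LEAF accepted after %d tries" % tries)
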
--
http://www.lightconsulting.com/~travis/  -><-
"We already have enough fast, insecure systems." -- Schneier & Ferguson
GPG fingerprint: 50A1 15C5 A9DE 23B9 ED98 C93E 38E9 204A 94C2 641B



Re: the effects of a spy

2005-11-16 Thread leichter_jerrold
On Tue, 15 Nov 2005, Perry E. Metzger wrote:
| Does the tension between securing one's own communications and
| breaking an opponent's communications sometimes drive the use of COMSEC
| gear that may be "too close to the edge" for comfort, for fear of
| revealing too much about more secure methods? If so, does the public
| revelation of Suite B mean that the NSA has decided it prefers to keep
| communications secure to breaking opposition communications?
Remember Clipper?  It had an NSA-designed 80-bit encryption algorithm.  One
interesting fact about it was that it appeared to be very aggressively
designed.  Most published algorithms will, for example, use (say) 5 rounds
beyond the point where differential cryptanalysis stops giving you an
advantage.  Clipper, on the other hand, falls to differential cryptanalysis
if you use even one less round than the specification calls for.

Why the NSA would design something so close to the edge has always been a
bit of a mystery (well, to me anyway).  One interpretation is that NSA
simply has a deeper understanding than outsiders of where the limits really
are.  What to us looks like aggressive design, to them is reasonable and
even conservative.

Or maybe ... the reasoning Perry mentions above applies here.  Any time you
field a system, there is a possibility that your opponents will get hold of
it.  In the case of Clipper, where the algorithm was intended to be
published, there's no "possibility" about it.  So why make it any stronger
than you have to?

Note that it still bespeaks a great deal of confidence in your understanding
of the design to skate *that* close to the edge.  One hopes that confidence
is actually justified for cryptosystems:  It turned out, on the key escrow
side of the protocol design, NSA actually fell over the edge, and there was
a simple attack (Matt Blaze's work, as I recall).

-- Jerry




Re: the effects of a spy

2005-11-16 Thread lrk
On Tue, Nov 15, 2005 at 06:31:30PM -0500, Perry E. Metzger wrote:
> 
> "Steven M. Bellovin" <[EMAIL PROTECTED]> writes:
> > Bruce Schneier's newsletter Cryptogram has the following fascinating 
> > link: http://www.fas.org/irp/eprint/heath.pdf
> > It's the story of effects of a single spy who betrayed keys and 
> > encryptor designs.
> 
> Very interesting indeed. I was unaware that the military had such
> astonishingly bad key management practices. One wonders if things have
> actually improved.
 
Probably not. I'm an outsider listening in, but what I can hear suggests
they are no better at key management, or at fielding crypto gear that does
not get in the way of fast, reliable tactical communications.


> One thing one hopes has changed is that it is no longer necessary for
> everyone to share the same keying material among so many
> different endpoints. Public key cryptography and key negotiation could
> (in theory) make it unnecessary to store shared secrets for long
> periods of time before use, where they are rendered vulnerable to
> espionage. One hopes that, over the last thirty years, this or
> something analogous has been implemented.

The term "broadcast" has a special meaning in the radio world. It is by
definition one-way. Thus the "fleet broadcast" was sent to all the ships
and each picked out its own messages. Key negotiation probably was never
practical on those circuits.

The broadcast became available via satellite sometime in the sixties. It 
was 75 baud teletype. It is still there today.


> One intriguing question that I was left with after reading the whole
> thing was not mentioned in the document at all. One portion of the
> NSA's role is to break other people's codes. However, we also have to
> assume that equipment would fall into "the wrong people's hands" at
> intervals, as happened with the Pueblo incident. If properly designed,
> the compromise of such equipment won't reveal communications, but
> there is no way to prevent it from revealing methods, which could then
> be exploited by an opponent to secure their own communications.

I doubt the top-level equipment could fall into the wrong people's hands
as it is probably not in the field. The tactical systems don't need to be
as good since the information is not useful for very long.

With any luck, the EP-3 that landed in China did not give up as much info.
The CD-ROMs for loading the computers become unreadable after a few seconds
in the microwave oven. :)

 
> Does the tension between securing one's own communications and
> breaking an opponent's communications sometimes drive the use of COMSEC
> gear that may be "too close to the edge" for comfort, for fear of
> revealing too much about more secure methods? If so, does the public
> revelation of Suite B mean that the NSA has decided it prefers to keep
> communications secure to breaking opposition communications?

There is probably some level at which this is considered, but there is
little indication that the military is any less far behind the real world
than it has always been.

We also can hope the intel function has shifted from breaking diplomatic
and military communications to sifting out the gems from the pebbles in
the landslide of general telecomm.

And there is the problem of brainpower. The military and NSA probably have
less now than during real wars.

Note that by current standards, Alan Turing could not get a US security
clearance.



LRK



Re: the effects of a spy

2005-11-16 Thread Nicholas Bohm
Perry E. Metzger wrote:
> "Steven M. Bellovin" <[EMAIL PROTECTED]> writes:
> 
>>Bruce Schneier's newsletter Cryptogram has the following fascinating 
>>link: http://www.fas.org/irp/eprint/heath.pdf
>>It's the story of effects of a single spy who betrayed keys and 
>>encryptor designs.

[...]

> One intriguing question that I was left with after reading the whole
> thing was not mentioned in the document at all. One portion of the
> NSA's role is to break other people's codes. However, we also have to
> assume that equipment would fall into "the wrong people's hands" at
> intervals, as happened with the Pueblo incident. If properly designed,
> the compromise of such equipment won't reveal communications, but
> there is no way to prevent it from revealing methods, which could then
> be exploited by an opponent to secure their own communications.
> 
> Does the tension between securing one's own communications and
> breaking an opponent's communications sometimes drive the use of COMSEC
> gear that may be "too close to the edge" for comfort, for fear of
> revealing too much about more secure methods? If so, does the public
> revelation of Suite B mean that the NSA has decided it prefers to keep
> communications secure to breaking opposition communications?

Of historical interest on this question, there is useful material in
"Between Silk and Cyanide" by Leo Marks.

Marks was responsible for ciphers used during WWII by SOE for
communications with agents in German-occupied Europe.  He describes an
episode when he was visited by people from Bletchley Park who were
concerned that he was equipping agents with ciphers that (he deduced)
were too strong for Bletchley Park to attack if they should fall into
German hands and come into use by them.

It is understandable, particularly during the Battle of the Atlantic,
that UK priorities should have been to maintain the availability of
breaks into enemy traffic even at the risk of hazarding communications
with agents.  (If Britain had failed in the Atlantic the war in the west
would have been over.  If SOE failed, there were no short-term
consequences of similar seriousness.)

The preservation of secrecy about those breaks for nearly thirty years
after the end of the war suggests that those priorities may have become
ossified, which may in turn account for excessive governmental anxieties
over the spread of strong cryptography.  Any change in these priorities
would be of great interest.

Nicholas Bohm
-- 
Salkyns, Great Canfield, Takeley,
Bishop's Stortford CM22 6SX, UK

Phone   01279 871272(+44 1279 871272)
Fax  020 7788 2198   (+44 20 7788 2198)
Mobile  07715 419728(+44 7715 419728)

PGP public key ID: 0x899DD7FF.  Fingerprint:
5248 1320 B42E 84FC 1E8B  A9E6 0912 AE66 899D D7FF



Re: the effects of a spy

2005-11-15 Thread Perry E. Metzger

"Steven M. Bellovin" <[EMAIL PROTECTED]> writes:
> Bruce Schneier's newsletter Cryptogram has the following fascinating 
> link: http://www.fas.org/irp/eprint/heath.pdf
> It's the story of effects of a single spy who betrayed keys and 
> encryptor designs.

Very interesting indeed. I was unaware that the military had such
astonishingly bad key management practices. One wonders if things have
actually improved.

One thing one hopes has changed is that it is no longer necessary for
everyone to share the same keying material among so many different
endpoints. Public key cryptography and key negotiation could (in theory)
make it unnecessary to store shared secrets for long periods of time before
use, where they are rendered vulnerable to espionage. One hopes that, over
the last thirty years, this or something analogous has been implemented.
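
To give the flavor, here is a minimal sketch of ephemeral key agreement
using the pyca/cryptography package (my choice of library; this is not a
description of any fielded system, and the public keys would still have to
be authenticated somehow):

    from cryptography.hazmat.primitives import hashes
    from cryptography.hazmat.primitives.asymmetric.x25519 import X25519PrivateKey
    from cryptography.hazmat.primitives.kdf.hkdf import HKDF

    # Each endpoint generates a fresh keypair per session, so there is no
    # pre-positioned key list sitting in a safe for years waiting to be sold.
    ship_priv = X25519PrivateKey.generate()
    shore_priv = X25519PrivateKey.generate()

    # Only the public halves cross the wire; both sides derive the same key.
    ship_secret = ship_priv.exchange(shore_priv.public_key())
    shore_secret = shore_priv.exchange(ship_priv.public_key())

    def session_key(shared):
        return HKDF(algorithm=hashes.SHA256(), length=32, salt=None,
                    info=b"per-session traffic key").derive(shared)

    assert session_key(ship_secret) == session_key(shore_secret)

None of this helps on a strictly one-way circuit, of course, where there is
no return channel over which to negotiate.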

One intriguing question that I was left with after reading the whole
thing was not mentioned in the document at all. One portion of the
NSA's role is to break other people's codes. However, we also have to
assume that equipment would fall into "the wrong people's hands" at
intervals, as happened with the Pueblo incident. If properly designed,
the compromise of such equipment won't reveal communications, but
there is no way to prevent it from revealing methods, which could then
be exploited by an opponent to secure their own communications.

Does the tension between securing one's own communications and
breaking an opponent's communications sometimes drive the use of COMSEC
gear that may be "too close to the edge" for comfort, for fear of
revealing too much about more secure methods? If so, does the public
revelation of Suite B mean that the NSA has decided it prefers to keep
communications secure to breaking opposition communications?

Perry
