On 09/08/2010 06:35 PM, Ian G wrote:

> As a final footnote; why is K2 so misused? Why does everyone believe
> that Shannon's maxim means you must never use a secret algorithm?

I always figured there were a few reasons:

1. To ensure that the people designing, implementing, and operating the overall system have no excuse with which to rationalize poor practices in handling the key material.

Human nature and engineering pressures being what they are, it's highly probable that at some point in the design process an engineer will have in the back of his mind "it's just a minor issue and it's secret anyway so no one will figure it out". Or operations staff may think "we can just change the key, we needn't report its loss since the bad guys won't have the hardware to use it."

In theory it would seem like obscurity doesn't hurt security. In practice, it seems to be a negative pressure on quality.

2. It reflects reality. Either the system is too inconsequential to bother with or the reverse engineer wins in the end. Seriously, this kind of secrecy has failed pretty much every time it has been tried, even in hardware. E.g. the TPM chip: http://extendedsubset.com/?p=19 "Tarnovsky’s examination process involved subtle use of hardware-based liquid chemical and gas technologies in a lab setting to probe with specialized needles to build tungsten bridges."

3. The security of the system components must depend only on those things which can be well-defined, or else the security of the overall system cannot be well-defined. The secrecy of an algorithm is notoriously difficult to characterize. So this principle is as much about compartmentalization as anything else, forcing a separation between the parts that can be formally reasoned about and those which cannot.

It discourages the old "and then a miracle occurs"-type reasoning from being restated simply as "and the algorithm is secret".
http://star.psy.ohio-state.edu/coglab/Miracle.html
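One way to see what "well-defined" buys you: the secrecy of a k-bit key has a number attached to it, namely the attacker's expected brute-force work, while nobody has an agreed-upon way to put a number on how secret an algorithm is. A trivial illustration in Python:

    # A uniformly random k-bit key forces an attacker with no other
    # information into exhaustive search: about 2**(k-1) trials on average.
    def expected_guesses(key_bits: int) -> int:
        return 2 ** (key_bits - 1)

    for k in (40, 56, 128):
        print(f"{k}-bit key: roughly {expected_guesses(k):.2e} expected guesses")

    # There is no analogous function for a secret algorithm -- no accepted
    # way to quantify how long it stays secret or how much work reverse
    # engineering it takes, which is exactly the problem with building a
    # security argument on it.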

I can think of four cases:

A. The secrecy of the algorithm is not a requirement for the security of the system.

B. The algorithm is secret, so it must remain secret and be handled as a secret. Treating it as such is a lot of extra work. It's very difficult to keep a secret, particularly around networked computers, and a requirement for algorithm secrecy greatly limits the possible applications. Good luck if you ever need to revoke it.

C. There are multiple defined areas of trust in the system, and the people trusted with the implementation are trusted differently than those trusted with the messages. How the implementation exchanges messages with those who are trusted with them is a different problem, probably involving some other key, and around we go. This obviously introduces a lot of complexity and probably isn't what is intended for most systems.

D. Security of the system isn't binary. Learning the algorithm is a security break, but only in some well-defined and limited way. For example, knowing the algorithm might let an attacker decrypt messages but not forge new valid ones. This decomposes into modeling two attackers, one who can know the algorithm and another who is presumed not to know it. Each sub-case is then one of A, B, or C. (Or even D. Yes, why stop there? Perhaps you have an algorithm where the attacker may be expected to learn only some parts of it.)
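To make case D concrete, here's a toy sketch in Python. Everything in it is invented for illustration: confidentiality rests (unwisely) on a transformation baked into every copy of the implementation, while integrity rests on a per-deployment HMAC key. The attacker who reverse engineers the code can read traffic but still can't forge it.

    import hmac, hashlib, os

    # Hypothetical "secret algorithm": a fixed transformation baked into every
    # copy of the implementation (here just XOR with a built-in constant).
    # Anyone who reverse engineers the binary learns it, so it acts like a
    # key shared with the whole world.
    _BUILTIN_PAD = bytes.fromhex("a5" * 32)  # made-up constant, not a real design

    def secret_encode(plaintext: bytes) -> bytes:
        return bytes(b ^ _BUILTIN_PAD[i % len(_BUILTIN_PAD)] for i, b in enumerate(plaintext))

    secret_decode = secret_encode  # XOR with a fixed pad is its own inverse

    # Integrity, by contrast, rests on a conventional per-deployment key.
    mac_key = os.urandom(32)

    def send(plaintext: bytes):
        ct = secret_encode(plaintext)
        tag = hmac.new(mac_key, ct, hashlib.sha256).digest()
        return ct, tag

    def receive(ct: bytes, tag: bytes) -> bytes:
        if not hmac.compare_digest(tag, hmac.new(mac_key, ct, hashlib.sha256).digest()):
            raise ValueError("forged or corrupted message")
        return secret_decode(ct)

    # Case D's two attackers: the one who has recovered secret_encode can
    # decrypt any intercepted ct; without mac_key, neither attacker can
    # produce a tag that receive() will accept.
    ct, tag = send(b"attack at dawn")
    assert receive(ct, tag) == b"attack at dawn"
    assert secret_decode(ct) == b"attack at dawn"  # algorithm-knowing attacker reads it anyway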

So rather than stumbling into it, which of the above cases is appropriate for our design?

Let's rule out case D; that's just extra work. If you're prepared to show the system is only insecure in a limited way to an attacker who knows the algorithm, it's probably easier to just go ahead and deliver full security in all valid cases.

Case C simply isn't called for by most requirements; introducing it on its own would just be weird. Since it requires lots of extra interactions with other parts of the overall system to deploy securely, it won't be done properly if it's not a necessary part of the process.

In case B, anyone who has access to an implementation of the system can break the system. Knowledge of the algorithm is effectively a global master key. Why bother with a key at all then?

This typically leaves case A.

Can anyone help with pointers to particular cases?

> Skype: still secret today ...
http://www.theregister.co.uk/2010/07/09/skype_crypto/
"Reverse engineer extracts Skype crypto secret recipe - VoIP service mulls legal action"

> GSM: cracked in 1998, didn't worry it at all.
If what you're saying is that nobody cares about GSM security, it's not a relevant example.

> Netscape: 40 bit crypto crunched by a couple of bored students
> in 1997?, didn't slow down the web one iota.
The broken RNG thing? SSL depends on a good source of random numbers, but the algorithm itself is public.
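For the record, what Goldberg and Wagner found back in 1995 was a seeding problem, not anything about the cipher: Netscape's session keys came out of a generator seeded with the time of day and a couple of process IDs, all of which an eavesdropper can largely guess. Roughly (the seeding details here are simplified and from memory):

    import hashlib

    def netscape_style_seed(seconds, microseconds, pid, ppid):
        # Roughly the shape of the old Navigator seed: every input is something
        # an attacker can narrow to a small range by watching the connection.
        return hashlib.md5(f"{seconds}.{microseconds}:{pid}:{ppid}".encode()).digest()

    # Search space an eavesdropper actually faces, very roughly:
    candidate_seconds = 60           # connection time known to within a minute
    candidate_usecs = 1_000_000      # the microseconds field
    candidate_pids = 10_000          # plausible pid/ppid combinations, simplified
    total = candidate_seconds * candidate_usecs * candidate_pids
    print(f"about {total:.1e} candidate seeds")  # ~6e11, nothing next to a real 128-bit key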

> Suite A: so secret, we don't even know if it exists...
Perhaps it exists without anyone at all knowing.

> RC4: reverse engineered as ARC4, still in use,
> by Skype for example!

Like Skype, it's more evidence for the idea that an algorithm can be either secret or widely adopted, but not both.
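And it's worth remembering how little that trade secret amounted to once it leaked: the "alleged RC4" posted anonymously in 1994 boils down to roughly the following (a quick sketch for illustration, not a reviewed implementation, and RC4 has plenty of published weaknesses by now):

    def arc4(key: bytes, data: bytes) -> bytes:
        # Key-scheduling algorithm (KSA)
        S = list(range(256))
        j = 0
        for i in range(256):
            j = (j + S[i] + key[i % len(key)]) % 256
            S[i], S[j] = S[j], S[i]
        # Pseudo-random generation algorithm (PRGA), keystream XORed with the data
        out = bytearray()
        i = j = 0
        for byte in data:
            i = (i + 1) % 256
            j = (j + S[i]) % 256
            S[i], S[j] = S[j], S[i]
            out.append(byte ^ S[(S[i] + S[j]) % 256])
        return bytes(out)

    # Encryption and decryption are the same operation.
    ct = arc4(b"Key", b"Plaintext")
    assert arc4(b"Key", ct) == b"Plaintext"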

> (*) Lucky Green extracted the algorithms from the GSM phone; it took about
> 3 months of probing to extract all the bits out. Then, the same couple
> of bored students as in the Netscape hack, David Wagner and Ian Goldberg,
> gave him a hand and cracked the algorithms "in a day" or so the media
> said at the time... Technically, not all of the algorithms were cracked, but
> that's mostly irrelevant to the story.

> (&) The designated enemy for the GSM phone was twofold: paparazzi
> listening to private calls (typically, secret affairs between notable
> people), and time-stealing by cloning the phone. Both of these
> disappeared completely with the GSM.

Last I saw, GSM security was defeatable with $1500 in hardware and an open source boot CD. Attacks against the negotiated ciphers meant that even transmissions intercepted in the past could be decrypted.

- Marsh
_______________________________________________
cryptography mailing list
[email protected]
http://lists.randombit.net/mailman/listinfo/cryptography
