On 10/09/10 3:37 AM, Marsh Ray wrote:
On 09/08/2010 06:35 PM, Ian G wrote:
As a final footnote: why is K2 so misused? Why does everyone believe
that Shannon's maxim means you must never use a secret algorithm?
I always figured there were a few reasons:
1. To ensure that the people designing, implementing, and operating the
overall system have no excuse with which to rationalize poor practices
in handling the key material.
Human nature and engineering pressures being what they are, it's highly
probable that at some point in the design process an engineer will have
in the back of his mind "it's just a minor issue and it's secret anyway
so no one will figure it out". Or operations staff may think "we can
just change the key, we needn't report its loss since the bad guys won't
have the hardware to use it."
In theory it would seem like obscurity doesn't hurt security. In
practice, it seems to be a negative pressure on quality.
Right, this is a very important reason. In my experience, when I've
come across someone saying "that's secret", I generally find out later
that it's just a cover for some shameful rot like incompetence or
missing bits.
So in general, on the Internet, we would say that systems should not be
secret, and for the net this has been a big win for us, as almost
all the general systems that were written in the open have defeated
their closed opponents. The whole open source thing.
However, when it comes to security, the record is not so good, and the
logic is more contorted. E.g., IPSec versus Skype. Open failure versus
closed success.
2. It reflects reality. Either the system is too inconsequential to
bother with or the reverse engineer wins in the end. Seriously, this
kind of secrecy has failed pretty much every time it has been tried,
even in hardware. E.g. the TPM chip: http://extendedsubset.com/?p=19
"Tarnovsky’s examination process involved subtle use of hardware-based
liquid chemical and gas technologies in a lab setting to probe with
specialized needles to build tungsten bridges."
Yes. So an important part of secrecy is: what happens when the secret
is breached? Does it cause overall collapse of the system, or is the
secret not really viral? Recall that when DeCSS broke the DVD's
encryption, the breach spread across the whole system, whereas the
crack of a GSM phone only cracks that one phone. You have to reproduce
the crack every time, leaving tracks.
In one model, a secret can be used to benefit; in another it can result
in brittleness. Economics matters (in security we call economics "risk").
3. The security of the system components must depend only on those
things which can be well-defined, or else the security of the overall
system cannot be well-defined. The secrecy of an algorithm is
notoriously difficult to characterize. So this principle is as much
about compartmentalization as anything else, forcing a separation
between the parts that can be formally reasoned about and those which
cannot.
Right. E.g., the standard block cipher is a wonderful black box, but it
has a marketing downside: the reasoning certainty of the block cipher
is often applied to other cryptography without realising that it doesn't
cross over. E.g., people rationalise that SSL is secure ... which means
that secure browsing is secure ... which means that phishing is
impossible. Rationalising about one easy black box is seductively
carried over to another area with no easy rationalisation.
It discourages the old "and then a miracle occurs"-type reasoning from
being restated simply as "and the algorithm is secret".
http://star.psy.ohio-state.edu/coglab/Miracle.html
some business plans I've seen :)
I can think of four cases:
A. The secrecy of the algorithm is not a requirement for the secrecy of
the system.
B. The algorithm is secret, so it must remain secret and be handled as
a secret. Treating it as such is a lot of extra work. It's very
difficult to keep a secret, particularly around networked computers. A
requirement for algorithm secrecy greatly limits the possible
applications. Good luck if you ever need to revoke it.
C. There are multiple defined areas of trust in the system and people
trusted with the implementation are trusted differently from those
trusted with
the messages. How the implementation exchanges messages with those who
are trusted with them is a different problem, probably involving some
other key and around we go. This obviously introduces a lot of
complexity and probably isn't what is intended for most systems.
Yes, and complexity breeds insecurity. Attackers look for the gaps.
Plus it's a whole lot of work.
D. Security of the system isn't binary. Learning the algorithm is a
security break, but only in some well-defined and limited way. For
example, knowing the algorithm might let an attacker decrypt messages
but not forge new valid ones. This decomposes into modeling two
attackers, one who can know the algorithm and another who is presumed to
not know it. Each sub-case is now one of A, B, or C. (Or even D. Yes,
why stop there? Perhaps you have an algorithm where the attacker may be
expected to learn only some parts of it.)
Defence in depth. Good security modellers will always ask something like
"and what happens to the system if AES is completely cracked?" Good
systems survive. E.g., a payment system will often rely on the cipher
for privacy only; a digsig and one-time authorisations ensure that
payments carry on, albeit with less privacy.
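
To make that concrete, here is a minimal sketch (all names are
hypothetical; a toy XOR stream stands in for the real cipher, and an
HMAC stands in for the digsig) of the cipher carrying privacy only,
while a tag plus a one-time nonce carry the payment:

# Sketch: defence in depth in a toy payment scheme (hypothetical names;
# toy_encrypt is a stand-in XOR stream, NOT a real cipher, and the HMAC
# stands in for the digsig).

import hashlib, hmac, os

def toy_encrypt(key: bytes, data: bytes) -> bytes:
    # Toy keystream XOR; also decrypts, since XOR is its own inverse.
    stream = hashlib.sha256(key).digest()
    stream = (stream * (len(data) // len(stream) + 1))[:len(data)]
    return bytes(a ^ b for a, b in zip(data, stream))

def make_payment(enc_key: bytes, sig_key: bytes, order: bytes) -> dict:
    nonce = os.urandom(16)  # one-time authorisation
    body = nonce + order
    tag = hmac.new(sig_key, body, hashlib.sha256).digest()
    return {"ct": toy_encrypt(enc_key, body), "tag": tag}

def accept_payment(enc_key: bytes, sig_key: bytes, msg: dict,
                   seen: set) -> bytes:
    body = toy_encrypt(enc_key, msg["ct"])
    expect = hmac.new(sig_key, body, hashlib.sha256).digest()
    if not hmac.compare_digest(expect, msg["tag"]):
        raise ValueError("forged")    # survives a total cipher break
    nonce, order = body[:16], body[16:]
    if nonce in seen:
        raise ValueError("replayed")  # one-time use enforced
    seen.add(nonce)
    return order

# If toy_encrypt falls, attackers can read orders (privacy lost) but
# still cannot forge or replay payments without sig_key.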
Also, algorithm agility can in theory replace algorithms when they are
broken. However, this is a double-edged sword: it involves a lot of
complexity (hence work and insecurity), and often doesn't work nearly as
well as you'd hope. An example of this is the renegotiation break in
SSL: we are now one year and counting into the roll-over of a broken
protocol. SSL renegotiation is instructive because people really care.
We can see the same game going on with "get rid of SSL v2", which is
around 5 years and counting ... but there most people don't care.
In contrast, non-committee-based systems like Skype can probably roll
over in weeks or months.
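
A rough sketch of what agility looks like on the wire (the format and
names below are my own invention for illustration, not SSL's): every
message carries an algorithm identifier, and roll-over means raising the
floor of what you'll accept. It also shows where the pain lives: every
deployed peer must learn the new floor, and the identifier itself
becomes a downgrade target.

# Sketch of algorithm agility (hypothetical wire format). Retiring a
# broken algorithm means raising MINIMUM_ACCEPTED -- one line in the
# code, years of work across a deployed base.

import hashlib

REGISTRY = {
    1: hashlib.sha1,    # legacy, due for retirement
    2: hashlib.sha256,  # current
}
MINIMUM_ACCEPTED = 2    # the roll-over, in one line

def wrap(alg_id: int, payload: bytes) -> bytes:
    return bytes([alg_id]) + REGISTRY[alg_id](payload).digest() + payload

def unwrap(msg: bytes) -> bytes:
    alg_id = msg[0]
    if alg_id not in REGISTRY or alg_id < MINIMUM_ACCEPTED:
        raise ValueError("algorithm retired")  # refuse the downgrade
    n = REGISTRY[alg_id]().digest_size
    digest, payload = msg[1:1 + n], msg[1 + n:]
    if REGISTRY[alg_id](payload).digest() != digest:
        raise ValueError("corrupt message")
    return payload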
So rather than stumbling into it, which of the above cases is
appropriate for our design?
Exactly. Every design has to meet a set of varying requirements. The
idea of drop-in crypto design is flawed, because it assumes that
everyone has the same set of requirements.
Let's rule out case D; that's just extra work. If you're prepared to
show the system is only insecure in a limited way to an attacker who
knows the algorithm, it's probably easier to just go ahead and deliver
full security in all valid cases.
Case C simply isn't called for by most requirements; introducing it on
its own would just be weird. Since it requires lots of extra
interactions with other parts of the overall system to deploy securely,
it won't be done properly if it's not a necessary part of the process.
You'll probably see a lot of that sort of C, D in the military and
national security fields. Because they can afford it?
In case B, anyone who has access to an implementation of the system can
break the system. Knowledge of the algorithm is effectively a global
master key. Why bother with a key at all then?
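
A toy illustration of case B (entirely hypothetical code): the secrecy
lives in a constant baked into every shipped copy, so extracting it from
any one binary unlocks every user's traffic, and there is nothing to
revoke.

# Case B in miniature: the "key" is the algorithm itself, shipped in
# every binary (toy example, not a real cipher).
SECRET_SBOX = bytes((7 * i + 13) % 256 for i in range(256))  # a permutation

def scramble(data: bytes) -> bytes:
    return bytes(SECRET_SBOX[b] for b in data)

# Anyone who pulls SECRET_SBOX out of one copy can invert it for all
# users, everywhere -- a global master key with no revocation story.
INVERSE = bytes(SECRET_SBOX.index(i) for i in range(256))

def unscramble(data: bytes) -> bytes:
    return bytes(INVERSE[b] for b in data)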
This typically leaves case A.
Perhaps because it is the cheapest solution, too :)
Can anyone help with pointers to particular cases?
Skype: still secret today ...
http://www.theregister.co.uk/2010/07/09/skype_crypto/
"Reverse engineer extracts Skype crypto secret recipe - VoIP service
mulls legal action"
Yes, one in a steady stream of minor breaks. Note that Skype uses RC4
only to encrypt the binary. So this is a classic case of defence in
depth: now someone can read the binary, but Skype is still secure
according to its own security model.
GSM: cracked in 1998, and the crack didn't worry it at all.
If what you're saying is that nobody cares about GSM security, it's not
a relevant example.
Well, it is. Because what was cracked was not easily duplicated by
many people. So the security held strong in the cases that mattered.
Netscape: 40-bit crypto crunched by a couple of bored students
in 1997; it didn't slow down the web one iota.
The broken RNG thing? SSL depends on a source of good random numbers but
it's a public algorithm.
Yeah, there were two crunches by Ian G (the other one): one was the
broken RNG and one was the 40-bit crypto. What is instructive, however,
is the 40-bit thing: even though Goldberg showed that it could be done,
nobody has ever done it in the wild.
Why not? And if not, why do we care?
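
One hedged back-of-envelope on why 40 bits was within student reach
(the rates below are assumptions for illustration, not measurements
from the actual break):

# Back-of-envelope: exhausting a 40-bit keyspace (illustrative rates,
# not figures from the 1997 crack).
keyspace = 2 ** 40             # ~1.1e12 candidate keys
rate = 200_000                 # keys/sec per machine, assumed
machines = 100                 # an idle student lab, assumed
hours = keyspace / (rate * machines) / 3600
print(f"~{hours:.0f} hours")   # ~15 hours: a weekend, not the NSA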
Suite A: so secret, we don't even know if it exists...
Perhaps it exists without anyone at all knowing.
RC4: reverse engineered as ARC4, still in use,
by Skype for example!
Like Skype, more evidence to support the idea that algorithms can be
either secret or widely adopted, but not both.
Right, but in business terms, RC4 carried on to be a widely used
algorithm. The secrecy was good for something, but once the secret was
out, the business arrangement didn't completely die away.
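
For reference, here is why the secrecy couldn't hold: the "alleged RC4"
posted anonymously in 1994 is only a few lines. A sketch of the leaked
keystream generator:

# ARC4: key scheduling (KSA) plus keystream generation (PRGA).
def arc4_keystream(key: bytes):
    s = list(range(256))
    j = 0
    for i in range(256):                 # KSA: key-dependent shuffle
        j = (j + s[i] + key[i % len(key)]) % 256
        s[i], s[j] = s[j], s[i]
    i = j = 0
    while True:                          # PRGA: emit keystream bytes
        i = (i + 1) % 256
        j = (j + s[i]) % 256
        s[i], s[j] = s[j], s[i]
        yield s[(s[i] + s[j]) % 256]

def arc4_crypt(key: bytes, data: bytes) -> bytes:
    # Encryption and decryption are the same XOR.
    ks = arc4_keystream(key)
    return bytes(b ^ next(ks) for b in data)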
(*) Lucky Green extracted the algorithms from the GSM phone; it took
about 3 months of probing to extract all the bits out. Then the same
couple of bored students as in the Netscape hack, David Wagner and Ian
Goldberg, gave him a hand and cracked the algorithms "in a day", or so
the media said at the time... Technically, not all of the algorithms
were cracked, but that's mostly irrelevant to the story.
(&) The designated enemy for the GSM phone was twofold: paparazzi
listening to private calls (typically, secret affairs between notable
people), and time-stealing by cloning the phone. Both of these
disappeared completely with GSM.
Last I saw, GSM security was defeatable with $1500 in hardware and an
open source boot CD. Attacks against the negotiated ciphers meant that
even transmissions intercepted in the past could be decrypted.
Right. And they haven't bothered to fix it. Which indicates that the
security is good enough for *their* security model. You're trying to
impose on them your security model. They don't care.
Who's right? Dunno, but I know who's making the money ;-)
iang