At 15:54 18-11-2009 +0000, you wrote:
>On 2009-11-18, David Brown <[email protected]> wrote:
>> N. Coesel wrote:
>
>>>> The old "security by obscurity" trick, that has /such/ a good reputation?
>>> 
>>> Any means of security is security by obscurity by definition.
>>> All protection schemes come down to hiding a secret
>>> (obscurity). Whether it's a key, a secret algorithm, etc.
>>
>> The phrase "security by obscurity" is normally taken to mean
>> "security by hiding the way it works", i.e., trying to hide
>> the code or algorithm.
>
>Exactly.  "Security by obscurity" does not refer to the fact 
>that you need to keep a secret key a secret.  It refers
>specifically to the dependence on keeping the design and
>implementation of the _algorithms_ a secret.
>
>Quoting Bruce Schneier in _Secrets_&_Lies_:
>
>      A good security design has no secrets in its details.  In 
>      other words, all of the security is in the product itself
>      and its changeable secrets: the cryptographic keys, the
>      passwords, the tokens and so forth.  The antithesis is
>      _security_by_obscurity_: The details of the system are 
>      part of the security.  If a system is designed with
>      security by obscurity then that security is delicate.
>
>Later in the same book:      
>
>      Again and again in this book I rail against _security_by_
>      _obscurity_: proprietary cryptography, closed source code,
>      secret operating systems.
>
>      
>Security by obscurity doesn't work.

Which is a big misconception!

Part of my job involves assessing and implementing electronic security
measures. I did a lot of thinking and reading about what security by
obscurity actually *means* and came to the following conclusions (I'll use
a lock and a key as metaphors):

- If you make the way the lock works publicly available then you need a
complex lock and a big key. This security measure relies on the key staying
hidden and on the lock taking a long time to pick.

- If you keep the way the lock works secret, you can keep the lock and the
key very simple. This security measure relies on keeping both the lock and
the key secret. Trying to pick the lock is an almost impossible task.

Both methods require keeping a secret. So both methods rely on obscurity
anyway!
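To make the two metaphors concrete: below is a minimal sketch of the
second approach, a toy "secret lock" in C that I made up purely for
illustration (it is not any real product's scheme). All of its strength
rests on the mixing step staying secret; once that leaks, the 16-bit key
falls to brute force in milliseconds. The first approach would instead use
a published lock such as AES with a 128-bit key.

/* Toy "secret lock": a hypothetical stream cipher whose strength
 * rests entirely on nobody knowing the mixing step below.  For
 * illustration only -- once the algorithm leaks, the 16-bit key is
 * trivial to brute-force. */
#include <stdint.h>
#include <stddef.h>
#include <stdio.h>

void toy_lock(uint8_t *buf, size_t len, uint16_t key)
{
    for (size_t i = 0; i < len; i++) {
        key = (uint16_t)(key * 31421u + 6927u); /* secret mixing step */
        buf[i] ^= (uint8_t)(key >> 8);          /* XOR with keystream */
    }
}

int main(void)
{
    uint8_t msg[] = "open sesame";
    toy_lock(msg, sizeof msg - 1, 0xBEEF);      /* lock   */
    toy_lock(msg, sizeof msg - 1, 0xBEEF);      /* unlock */
    printf("%s\n", msg);                        /* "open sesame" again */
    return 0;
}

Applying the same call twice restores the data, so the lock and the key
together stay very simple -- exactly the trade-off described above.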

So where is the real problem? Well, if you have to make the way the lock
works public, which can be required for code reviews, shared source code,
or to gain trust, then you can't get away with small keys. The downside is
that the lock is complicated, so it needs a lot of processing power, and it
is also prone to biased brute-force attacks. The upside is that, because a
lot of people can look at the lock, it can be improved by the community.
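A quick back-of-envelope calculation shows why the public lock needs the
big key. Assuming, purely for illustration, an attacker who can test 1e9
keys per second:

/* Rough brute-force cost per key size, at an assumed 1e9 keys/s. */
#include <stdio.h>
#include <math.h>

int main(void)
{
    double rate = 1e9;                 /* keys per second (assumed) */
    double secs_per_year = 3.156e7;
    for (int bits = 16; bits <= 128; bits += 16)
        printf("%3d-bit key: %.2e years\n", bits,
               pow(2.0, bits) / rate / secs_per_year);
    return 0;
}

At that rate a 48-bit key falls in a few days, while a 128-bit key takes
on the order of 1e22 years. A published algorithm has to carry all of its
security in a large key.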

Smart cards like MiFare typically use a secret lock. Is this bad? No; given
the amount of processing power available, it was the best choice! In order
to hack the MiFare chip, the researchers had to examine the chip itself. It
wasn't hacked by using brute force.

http://www.computerworld.com/s/article/9069558/How_they_hacked_it_The_MiFare_RFID_crack_explained
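For a feel of what such a secret lock looks like in silicon: Crypto-1, the
cipher in these cards, is a proprietary stream cipher built around a 48-bit
LFSR with a nonlinear output filter. The sketch below is only in that
general style; the taps and the output filter are invented for illustration
and are NOT the real Crypto-1 design.

/* LFSR-based keystream sketch in the general style of Crypto-1.
 * Taps and output filter are hypothetical, not the real design. */
#include <stdint.h>
#include <stdio.h>

static uint64_t state;                 /* 48-bit LFSR state, low bits */

void lfsr_init(uint64_t key48)
{
    state = key48 & 0xFFFFFFFFFFFFull;
}

uint8_t keystream_bit(void)
{
    uint64_t s = state;
    /* Feedback: XOR of a few tap positions (hypothetical taps). */
    uint64_t fb = ((s >> 0) ^ (s >> 5) ^ (s >> 9) ^ (s >> 47)) & 1u;
    /* Hypothetical nonlinear filter over a few state bits. */
    uint8_t out = (uint8_t)((((s >> 47) & (s >> 43)) ^ (s >> 9)) & 1u);
    state = ((s << 1) | fb) & 0xFFFFFFFFFFFFull;
    return out;
}

int main(void)
{
    lfsr_init(0xA5A5A5A5A5A5ull);      /* hypothetical 48-bit key */
    for (int i = 0; i < 16; i++)
        printf("%d", keystream_bit());
    printf("\n");
    return 0;
}

A generator like this costs only a handful of gates, which is the point:
within the silicon budget it was a reasonable choice. But once the design
was read back from the chip, the 48-bit key left no security margin.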

The bottom line is: whether you keep the lock secret or just the key
depends entirely on the requirements. There is no general rule of thumb.

When it comes to obfuscating code: it can be helpful to discourage
people. I recently had to assess a security measure for upgrading firmware
options using a license key. I used a debugger and a decompiler. The text
strings containing the error messages, linked together with the code,
served very well as comments on what the code did. However, the part that
converted the license key together with the serial number into a go/no-go
condition was spread over several functions. Although the decompiler gave
some insight into how the program was structured, determining the algorithm
wasn't easy because of the many function calls. I never bothered to
determine the algorithm, though: just skipping the check for the key made
the program enable the next upgrade in the list of possible (cumulative)
upgrades.
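To make that concrete, here is a hypothetical, deliberately naive version
of such a check (not the actual product's code) showing both weaknesses:
the error strings point straight at the interesting code, and the whole
decision collapses into a single branch.

/* Hypothetical naive license check.  The strings act as signposts
 * for a reverse engineer, and the go/no-go decision is one branch
 * that a debugger can step over or a patcher can NOP out. */
#include <stdio.h>
#include <stdint.h>

/* Stand-in for the real key derivation spread over many functions. */
static uint32_t derive(const char *serial, const char *key)
{
    uint32_t h = 5381;
    while (*serial) h = h * 33 + (uint8_t)*serial++;
    while (*key)    h = h * 33 + (uint8_t)*key++;
    return h;
}

int enable_upgrade(const char *serial, const char *key)
{
    if (derive(serial, key) != 0xC0FFEE42) {   /* the go/no-go branch */
        printf("Invalid license key for this serial number\n");
        return 0;
    }
    printf("Upgrade enabled\n");
    return 1;
}

int main(void)
{
    return enable_upgrade("SN-1234", "TRIAL-KEY") ? 0 : 1;
}

Patching out that one conditional jump, or forcing the comparison result in
a debugger, defeats the scheme without ever understanding derive() -- which
is essentially the shortcut described above.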

Nico Coesel

