N. Coesel wrote:
At 15:54 18-11-2009 +0000, you wrote:
On 2009-11-18, David Brown <[email protected]> wrote:
N. Coesel wrote:
The old "security by obscurity" trick, that has /such/ a good reputation?
Any means of security is security by obscurity by definition.
All protection schemes come down to hiding a secret
(obscurity), whether it's a key, a secret algorithm, etc.
The phrase "security by obscurity" is normally taken to mean
"security by hiding the way it works", i.e., trying to hide
the code or algorithm.
Exactly. "Security by obscurity" does not refer to the fact that you need to keep a secret key a secret. It refers
specifically to the dependence on keeping the design and
implementation of the _algorithms_ a secret.

Quoting Bruce Schneier in _Secrets_&_Lies_:

     A good security design has no secrets in its details.  In
     other words, all of the security is in the product itself
     and its changeable secrets: the cryptographic keys, the
     passwords, the tokens and so forth.  The antithesis is
     _security_by_obscurity_: the details of the system are
     part of the security.  If a system is designed with
     security by obscurity, then that security is delicate.

Later in the same book:
     Again and again in this book I rail against _security_by_
     _obscurity_: proprietary cryptography, closed source code,
     secret operating systems.

Security by obscurity doesn't work.

Which is a big misconception!


Security by obscurity works until someone can be bothered to break that particular system, and then it is totally trashed.

Part of my job involves assessing and implementing electronic security
measures. I did a lot of thinking and reading about what security by
obscurity actually *means* and came to the following conclusions (I'll use
a lock and a key as metaphors):

- If you make the way the lock works publicly available then you need a
complex lock and a big key. This security measure relies on the key staying
hidden and on it taking a long time to pick the lock.


The key must be big, but the lock doesn't have to be complex. A more complex lock will let you use a smaller key for the same security, however. Either way, you can pick a lock that many others have tried and tested, and that you can be confident is a good and safe lock. And obviously you need to keep your key safe.
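To put some rough numbers on what "big" means here, a quick back-of-the-envelope sketch in C (the guess rate of a billion keys per second is purely an illustrative assumption about the attacker):

    /* Rough estimate of brute-force search time for various key sizes.
       The guess rate of 1e9 keys/second is an illustrative assumption. */
    #include <stdio.h>
    #include <math.h>

    int main(void)
    {
        const double guesses_per_second = 1e9;     /* assumed attacker speed */
        const double seconds_per_year   = 3600.0 * 24.0 * 365.0;
        const int key_bits[] = { 32, 48, 64, 128 };

        for (size_t i = 0; i < sizeof key_bits / sizeof key_bits[0]; i++) {
            double keys  = pow(2.0, key_bits[i]); /* size of the key space */
            double years = keys / guesses_per_second / seconds_per_year;
            printf("%3d-bit key: ~%.2e years to exhaust the key space\n",
                   key_bits[i], years);
        }
        return 0;
    }

Even at that optimistic guess rate, a 32-bit key falls in seconds while a 128-bit key is far beyond any realistic effort, which is the whole point of making the key big rather than the lock secret.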

- If you keep the way the lock works secret, you can keep the lock and the
key very simple. This security measure relies on keeping both the lock and
the key secret. Trying to pick the lock is an almost impossible task.


The bad guys will have the lock (if they don't, there is nothing to break into!). So they will be able to dismantle the lock and see how it works. If it is complex, it will take some time and effort - but you can be 100% confident that there are people with the tools, knowledge, time and money to decipher any lock you choose to make if they think it is economically advantageous. It is not expensive to find someone with the equipment to saw the top off a chip and read out the flash and RAM contents using electron microscopes and microscopic probes.

Then there are the traditional vectors of attack - bribery, burglary, and blackmail. Find someone who knows how the lock works, and "persuade" them to reveal the workings.

Once the workings of the lock are revealed, every device ever made using that lock is compromised, since the keys are short and simple.

Both methods require keeping a secret. So both methods rely on obscurity
anyway!


The lock is /never/ secret - the bad guys have it in their hands. At most, the way the lock works is obfuscated.

The key is individual to each system, while the lock is the same on them all. So if you rely on the lock being secret, once it is broken /everybody/ is in trouble. To protect everyone else in case this happens, you have to make sure the keys are strong enough that other systems will be safe even if the lock is broken. And if that is the case, what exactly is the point in keeping the lock secret? It raises the cost of the first crack somewhat, but you lose all the advantages of being able to re-use known and trusted lock designs.

So where is the real problem? Well, if you have to make the way the lock
works public (which can be required for code reviews, shared source code,
or to gain trust), then you can't get away with small keys. The downside is
that the lock is complicated, so it needs a lot of processing power. The
lock is also prone to biased brute-force attacks. The upside is that
because a lot of people can look at the lock, it can be improved by the
community.


Locks don't have to be complicated. A simple xor cipher is a perfectly good encryption which is about as simple as could be imagined - it just needs a fairly long key (depending on the amount of data to be encrypted, and how hard you want it to be to break). There is a huge range from such xor ciphers through pseudo-random number generators, through DES, 3DES, AES, to advanced stuff like elliptic curve cryptography. They offer a balance between code complexity, run-time size and speed, and key size for a given level of security. You pick what suits your needs.
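As a rough illustration of just how simple such a "lock" can be, here is a minimal xor sketch. All of the strength is assumed to come from the key being long, random, kept secret and not reused for too much data - none of it comes from the code:

    /* Minimal xor "lock": the algorithm is trivial and public;
       all of the security lives in the key material. */
    #include <stddef.h>

    static void xor_crypt(unsigned char *data, size_t len,
                          const unsigned char *key, size_t key_len)
    {
        for (size_t i = 0; i < len; i++)
            data[i] ^= key[i % key_len];  /* same call encrypts and decrypts */
    }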

There is good reason that anybody who /really/ knows about this stuff, and really wants to protect something, uses standard known encryption algorithms.
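For instance, rather than inventing a cipher, an embedded design can call into a reviewed implementation of a standard algorithm. A minimal sketch using the AES routines from mbedTLS - just one example of such a library, with error handling omitted:

    /* Encrypt one 16-byte block with AES-128 using a vetted library
       (mbedTLS here, purely as an example) rather than a home-made cipher. */
    #include "mbedtls/aes.h"

    static void encrypt_block(const unsigned char key[16],
                              const unsigned char in[16],
                              unsigned char out[16])
    {
        mbedtls_aes_context ctx;

        mbedtls_aes_init(&ctx);
        mbedtls_aes_setkey_enc(&ctx, key, 128);               /* 128-bit key */
        mbedtls_aes_crypt_ecb(&ctx, MBEDTLS_AES_ENCRYPT, in, out);
        mbedtls_aes_free(&ctx);
    }

The algorithm and the source of the library are completely public; the only thing that needs protecting is the 16-byte key.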

Smart cards like MiFare typically use a secret lock. Is this bad? No, given
the amount of processing power available it is the best choice! In order to
hack the MiFare chip they had to examine the chip itself. It wasn't hacked
by using brute force.

http://www.computerworld.com/s/article/9069558/How_they_hacked_it_The_MiFare_RFID_crack_explained


Successful smart cards use known encryption algorithms. Have you actually looked at the MiFare? There is a good summary at <http://en.wikipedia.org/wiki/MIFARE>. It is clear that the so-called "secret lock" encryption can be broken within seconds, while the other MiFare cards use standard algorithms such as DES, 3DES, and AES.

I can think of few better examples showing why "security by obscurity" (defined in the way the rest of the world defines it, i.e., trying to keep the "lock" secret) does not work.

The bottom line is: keeping the lock secret or just the key depends
entirely on the requirements. There is no general rule of thumb.


There /is/ a rule of thumb, and it is somewhat scary that you don't realise this. I don't want to tell anyone how to do their job, although I understand it might sound like that. But take a long hard look at MiFare and you cannot fail to understand why the "secret lock" philosophy failed so badly, and why all newer MiFare cards use standard algorithms.

The only exception to this rule of thumb is when you are designing a single system and the key is fixed in each item made. Then it doesn't matter whether the key is long or the code is complex, since they are kept in the same place, and breaking one means breaking the other.

When it comes to obfuscating code: it can be helpful to discourage
people. I recently had to assess a security measure for upgrading firmware
options using a license key. I used a debugger and a decompiler. The text
strings containing the error messages linked together with the code served
very well as comments on what the code did. However, the part that
converted the license key together with the serial number into a go/no-go
condition was spread over several functions. Although the decompiler gave
some insight into how the program was structured, determining the algorithm
wasn't easy because of the many function calls. I never bothered to
determine the algorithm though. Just skipping the check for the key made
the program enable the next upgrade in the list of possible (cumulative)
upgrades.
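In other words, however widely the checking code is spread around, it eventually collapses into a single go/no-go decision that can be patched. A purely hypothetical sketch (the function names are invented for illustration):

    /* Hypothetical license gate: however the key is validated internally,
       everything funnels into one branch, which a cracker can simply patch. */
    #include <stdbool.h>

    /* Assumed helper: derives a go/no-go result from key and serial number. */
    extern bool license_key_matches(const char *key, const char *serial);

    bool upgrade_allowed(const char *key, const char *serial)
    {
        if (!license_key_matches(key, serial))   /* NOP-ing out this branch */
            return false;                        /* is all an attacker needs */
        return true;
    }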


Obfuscated code certainly makes it harder to understand. But if you are protecting anything valuable with this code, you will not stop professional crackers no matter what you do. There are people (mostly in ex-Soviet countries) with far more experience than you or I in this field, and who cost a tenth of the hourly rate. They /will/ determine the algorithm (unless they too can find a shortcut). Spreading the code around with extra function calls is a baby step in obfuscation - a professional cracker would see it as nothing more than poor program structure. Letting the compiler do its best optimisation and code inlining will make the code harder to follow. There are methods for /real/ obfuscation that post-process the generated assembly code and produce something worthy of the name "obfuscated code" - not that it helps much in reality.

