Here's a note I sent to PGN and Farber's respective lists that might be
of interest to those here.  I think the issues go well beyond my attack
against locks, reflecting a deep cultural clash that perhaps goes a long
way toward explaining things like the DMCA.


Keep it secret, stupid!

Last year, I started wondering whether cryptologic approaches might be
useful for the analysis of things that don't use computers.
Mechanical locks seemed like a natural place to start, since they
provided many of the metaphors we used to think about computer
security in the first place.

So I read everything I could get my hands on about locks, which
included most of the available open literature and at least some of
the "closed" literature of that field.  Once I understood the basics,
I quickly discovered, or more accurately re-discovered, a simple and
practical rights amplification (or privilege escalation) attack to
which most master-keyed locks are vulnerable.  The attack uses access
to a single lock and key to get the master key to the entire system,
and is very easy to perform.  For details, see
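In computer-security terms, the lock acts as an oracle: each probe key either opens it or doesn't, and each answer leaks information about the master cuts. The idea can be illustrated with a toy Python simulation (my own simplification for illustration; the pin count, depth count, and the "each pin accepts exactly two depths" model are assumptions, not specifications from the paper):

```python
import random

DEPTHS = 10   # possible cut depths per pin position (hypothetical)
PINS = 5      # number of pin positions (hypothetical)

def make_lock(rng):
    """Model a master-keyed pin-tumbler lock: each position opens at
    either the change-key depth or the master depth."""
    change = [rng.randrange(DEPTHS) for _ in range(PINS)]
    master = [rng.randrange(DEPTHS) for _ in range(PINS)]
    def opens(key):
        return all(k in (c, m) for k, c, m in zip(key, change, master))
    return change, master, opens

def recover_master(change, opens):
    """Rights amplification: holding the other positions at the known
    change-key depths, probe one position at a time."""
    master, probes = [], 0
    for i in range(PINS):
        found = change[i]  # if no other depth opens, master == change here
        for d in range(DEPTHS):
            if d == change[i]:
                continue
            probe = list(change)
            probe[i] = d
            probes += 1
            if opens(probe):   # the lock itself confirms the master cut
                found = d
                break
        master.append(found)
    return master, probes

rng = random.Random(1)
change, master, opens = make_lock(rng)
recovered, probes = recover_master(change, opens)
assert opens(recovered)               # the recovered key opens the lock
assert probes <= PINS * (DEPTHS - 1)  # linear work, not DEPTHS**PINS
```

The point of the sketch is the complexity collapse: varying one position at a time reduces an exponential keyspace search (here 10^5 candidate keys) to at most 45 probes, which is why the attack is practical with a handful of key blanks and access to a single lock.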

I wrote up the attack in a paper aimed more at convincing computer
scientists that locks are worth our attention than at anything else (I
called it "Rights amplification in master-keyed mechanical locks").
As I pointed out in the paper, surely I could not have been the first
to discover this -- locksmiths, criminals, and college students must
have figured this out long ago.  Indeed, several colleagues mentioned
that my paper reminded them of their college days.  There is
considerable evidence that similar methods for master key decoding
have been discovered and rediscovered over the years, used illicitly
and passed along as folklore (several people have unearthed Internet
postings dating back as much as 15 years describing how to make master
keys).  Curious college students -- and professional burglars -- have
long been able to get their hands on master keys to the places that
interest them.

But the method does not seem to appear in the literature of locks and
security, and certainly users of master-keyed locks did not seem to
know about this risk.  I submitted the paper to a journal and
circulated it to colleagues in the security community.  Eventually,
the paper reached the attention of a reporter at the New York Times,
who wrote it up in a story on the front page of the business section
last week.

The response surprised me.  For a few days, my e-mail inbox was full
of angry letters from locksmiths, the majority of which made both the
point that I'm a moron, because everyone knew about this already, and
the point that I'm irresponsible, because this method is much too
dangerous to publish.  A few managed to work in a third point as
well: that the method couldn't possibly work, because obviously I'm
just some egghead who doesn't know anything about locks.

Those letters, with their self-canceling inconsistency, are easy
enough to brush aside, but there seems to be a more serious problem
here, one that has led to a significant real-world vulnerability for
lock users but that is sadly all too familiar to contemporary
observers of computer security.

The existence of this method, and the reaction of the locksmithing
profession to it, strikes me as a classic instance of the complete
failure of the "keep vulnerabilities secret" security model.  I'm told
that the industry has known about this vulnerability and chosen to do
nothing -- not even warn their customers -- for over a century.
Instead, the method was kept secret and passed along as folklore,
sometimes used as a shortcut for recovering lost master keys for
paying customers.  If at some point in the last hundred years this
method had been documented properly, surely the threat could have
been addressed and lock customers allowed to make informed decisions
about their own security.

The tragic part is that there are alternatives.  There are several
lock designs that turn out to resist this threat, including master
rings and bicentric locks.  While these designs aren't perfect, they
resist completely the adaptive oracle attack described in my paper.
It's a pity that stronger alternative designs have been allowed to die
a quiet death in the marketplace while customers, ignorant of the
risks, have spent over a hundred years investing in inferior systems.

Although a few people have confused my reporting of the vulnerability
with causing the vulnerability itself, I can take comfort in a story
that Richard Feynman famously told about his days on the Manhattan
project.  Some simple vulnerabilities (and user interface problems)
made it easy to open most of the safes in use at Los Alamos.  He
eventually demonstrated the problem to the Army officials in charge.
Horrified, they promised to do something about it.  The response?  A
memo ordering the staff to keep Feynman away from their safes.

Matt Blaze
26 January 2003

The Cryptography Mailing List