Hi, all, here's the note on a security 'philosophy' for LISP.
This is a bit rough and incomplete, for which I apologize in advance: it's
clipped out of a much longer document. That document included a lot more
actual detail on how this approach would work when used to create security
mechanisms for one part of LISP (it applied the philosophy laid out here to
securing DDT), which made it a little easier to grasp some of the
(admittedly) very high-level things it talks about, with a concrete
instantiation in hand.
And now that I look at the 'Architectural Perspective' document, most of this
is already in there (albeit in slightly re-arranged form). Sigh!
Still, it might be useful for this chunk to be pulled out separately, for the
WG to consider, to see if everyone is OK with these general guidelines for
security for LISP.
Noel
----
Notes on security for LISP
Architectural Setting
There are a small number of key principles that will guide the design:
- Design lifetime
- Threat level
How long is the design intended to last? If LISP succeeds, I would guess a
minimum of a 50-year lifetime (IPv4 is now 34, and will be around for at
least a decade yet, if not longer; DNS is 28, and will probably last
indefinitely).
How serious are the threats it needs to meet? Well... The _good_ thing about
the Internet is that it brings the world to your doorstep - masses of
information from around the world are instantly available on your display.
The _bad_ thing about the Internet is that it brings the world to your
doorstep - legions of crackers, thieves, and general scum and villainy. Their
sophistication level is rising all the time: as the easier holes are plugged,
they go after others. This will, eventually but inevitably, require the most
powerful security mechanisms available to counteract their attacks.
An illustration from another field may be useful here. The kind of complex
and powerful security available in multi-level secure time-sharing systems
like Multics appeared at one point to be a relic of another age, not useful
in the age of personal machines. Complex and difficult, they appeared to be
an expensive dead end. However, when active content first started to appear,
it became apparent that allowing active content from the network was allowing
almost anyone to share the computer with you - exactly the situation the
security of multi-level secure time-sharing systems was designed to handle.
Which is not to say that LISP needs to be maximally secure _right away_. The
threat will develop and grow over a long time period. However, the basic
design has to be capable of being _securable_ to the expanded degree that
will eventually be necessary. _Eventually_ it will need to be as securable
as, say, DNS - i.e. it _can_ be secured to the same level, although people
may choose not to secure their branches as well as, say, the DNS root does.
At the same time, two pitfalls must be avoided.
First, if we make the security too complex, and always mandatory, it will
dissuade people from implementing LISP - or make it much harder, if they do -
because they won't be able to run LISP without building all sorts of complex
security stuff (which, after all, does not add any _functionality_).
Second, we need to pay a lot of attention to usability, particularly in the
area of configuration. A lot of experience has shown that when high-grade
security is 'too much hassle', people wind up not using it, because it's too
much of a PITA to configure it, etc, etc.
Design Approach
While LISP has to have an 'ultimate' security mechanism which is as good as
that of, say, DNS, such a system can be complex to implement and configure.
However, we can attempt to make LISP have an 'easier' mode, one which will be
both easier to configure, and less work to implement. This can be both an
interim system (with the full-powered system available for when it is
needed), as well as the system used on branches where security is less
critical.
The challenge is to do this in a way that does not make the design yet more
complex (since it has to include both the 'full strength' mechanism and
the 'easier to configure' mechanism). This is the fundamental tradeoff we
struggle with: we can provide 'easier to configure' options, but that may
make the overall design and implementation more complex.
It's not that a minimal design is insecure, or the basic approach is
wrong - it's not. It's more a question of some operational convenience/etc
issues - e.g. 'how easy will it be to recover from a cryptosystem compromise'.
E.g. if we have two ways to recover from a security compromise, one which is
mostly manual and a lot of work, and another which is more automated but
makes the protocol more complicated, if compromises really are very rare,
maybe the smart call _is_ to go with the manual thing - as long as we have
looked carefully at both options, and understood in some detail the costs and
benefits of each.
As far as making it hard to implement to begin with: we can make it 'easy' to
deploy initially by simply not implementing/configuring the heavy-duty
security early on.
Provided, of course, that the packet formats, etc, are all included in the
design to begin with, but that should not be a problem. (We can even put off
issuing detailed specifications for some complex operations, like automatic
updates for root-key rollover, until later in the process - provided all the
tools that will be needed, in terms of packet formats, etc, are already there.)
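As a purely hypothetical sketch of this idea (the struct and field names here
are invented for illustration, not taken from any LISP specification): a
control-message header can carry key-identifier and authentication fields in
the wire format from day one, with early implementations simply zeroing them.

```c
#include <assert.h>
#include <stdint.h>
#include <string.h>

/* Hypothetical control-message header: the security-related fields
 * (key_id, auth_len, auth_data) are part of the packet format from
 * the start, even if early implementations never fill them in. */
struct ctrl_hdr {
    uint8_t  type;          /* message type */
    uint8_t  flags;         /* bit 0: security material present */
    uint16_t auth_len;      /* length of valid auth_data; 0 = unsecured */
    uint32_t key_id;        /* identifies the signing key; 0 = none */
    uint8_t  auth_data[32]; /* signature/HMAC; ignored when auth_len == 0 */
};

/* An early, "no security yet" implementation just zeroes the security
 * fields; a later implementation fills them in without any change to
 * the wire format. */
static void init_unsecured(struct ctrl_hdr *h, uint8_t type)
{
    memset(h, 0, sizeof *h);
    h->type = type;
}
```

The point of the sketch is only that deferring the security *behavior* is
cheap once the *format* reserves room for it; receivers that see
`auth_len == 0` skip verification, and nothing breaks when secured senders
appear later.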
I know this is starting to sound like a lot of stuff, but we need really
good security in LISP - if it becomes a key part of the network
infrastructure, people are just going to demand that it have securability
_and_ ease of use equal to (or better than) that of DNS.
It's good to be very sensitive to making LISP security _too_ complicated
without really good reason. But better we do a really good job on
it ourselves, where we can keep to the goal of 'as simple as possible', than
have someone else land us with a complicated ball of yarn...
Saying to the IETF/customers 'security compromises are not a real issue' is
not going to fly well - but saying 'we looked at the following mechanism to
deal in a semi-automated way with compromises {give detail so it's clear we
really did look at it in detail} and we decided the complexity cost/benefit
is not high enough to make it worth adding, versus this more manual procedure
{again, details}' is much more likely to be OK with them.
That shows we've done our due diligence, and it's not just 'oh, we can't be
bothered' or 'this doesn't fit with our ideology'.
Design Principles
Security in LISP faces many of the same challenges as security for other
parts of the Internet: good security usually means work for the users, but
without good security, things are vulnerable. The Internet has seen many very
secure systems devised, only to see them fail to reach wide adoption; the
reasons for that are complex, and vary, but being too much work to use is a
common thread.
In addition, at this stage of LISP's development/deployment, building in
the eventual maximum security mechanisms presents a large (perhaps too large)
barrier to implementation and deployment.
It is for all these reasons that LISP attempts to provide 'just enough'
security.
A couple of principles can guide us in the design.
- Prior experience
DNSSEC offers a lot of lessons - its capabilities have been identified as a
result of actual operational experience. The current architecture is the
result of a decade of field experience, and changes based on that experience
to make it easier to configure, maintain, etc: most (all?) of the changes
since 1997 are not because of protocol/security flaws, but because actual
experience showed the system to be operationally 'non-optimal'.
Another very useful source comes from the advice of people with professional
experience in secure systems; typically the military, which has long and deep
experience with the operation of large-scale secure systems. Their advice in
particular should be given due weight, as they have long experience in
operating widespread systems with "ordinary" personnel.
In particular, it should be noted that historically many systems have been
broken into, not through a weakness in the algorithms, etc, but because of
poor operational mechanics. (The Allies' well-known 'Ultra' break-ins were
mostly enabled by German failures in operational procedure.) So operational
capabilities intended to reduce the chance of human operational failure are
just as important as strong algorithms; making things operationally robust is
a key part of 'real' security.
- Complexity
Complexity is bad for several reasons, and should always be reduced to a
minimum. There are three kinds of complexity cost: protocol complexity,
implementation complexity, and configuration complexity. We can further
subdivide protocol complexity into packet format complexity, and algorithm
complexity. (There is some overlap between algorithm complexity and
implementation complexity.)
We can, within some limits, trade off one kind of complexity for others: e.g.
we can provide configuration _options_ which are simpler, at the cost of
making the protocol and implementation complexity greater. And we can make
initial (less capable) implementations simpler if we make the protocols
slightly more complex (so that early implementations don't have to implement
all the features of the full-blown protocol).
_______________________________________________
lisp mailing list
[email protected]
https://www.ietf.org/mailman/listinfo/lisp