Brief summary of the discussion so far:
---------------------------------------
I made the claim that tunnelling, while being sometimes (but not
always) a fine tool, is an indication that the architecture has
shortcomings, and therefore has "failed" in some sense of the word.
[Apparently the word "failure" was too strong, but at least it
generated some discussion :-)]
A number of people replied that tunnels are a useful tool for
virtualising the address space, similar to virtual memory in von
Neumann architecture. One could even claim it to be similar to
virtual machines.
Clarifications on my side:
--------------------------
I am definitely not saying that tunnelling is bad and that we should
ban it; quite the converse. As several people mentioned, it is a fine
tool, often allowing much faster evolution and often leading to
easier operations.
What I am trying to do is to understand the pressures that lead to
the currently very common use of tunnelling (despite its apparent
technical problems). One way to express my feeling is the
following: I think that we should not need tunnelling; i.e., our
architecture should be powerful enough so that (most current) needs
for tunnelling would disappear.
In this message I am still trying to argue in favour of my position
that tunnelling is an indication of something missing, mainly
pointing out that there are alternatives to virtualisation, both in
computer and network architecture.
If we can go further and try to understand the alternatives and their
potential implications, that would be even better.
Going further:
--------------
I think comparing tunnelling to virtual memory is a good analogy, and certainly
something that I missed in my original thinking. [Thanks to
everybody that pointed it out.] However, I also think that we are
not doing a very good job there.
I am framing my argument below as follows:
1. There are architectural alternatives to today's virtual memory.
2. Virtual machine is more than virtual memory.
3. None of our current tunnelling practices come even close to the
level of abstraction provided by either the virtual memory or virtual
machine abstractions.
4. Even if we may get a clean virtualisation architecture with IPv6,
we are not there yet, and it may not be clear where, exactly, we should
be going.
Details:
1. Architectural alternatives to virtual memory
===============================================
If we think about today's virtual memory systems, they consist of
two mechanisms that seem to be orthogonal:
a. Mapping virtual addresses to (different) real addresses
b. Protection bits, providing lots of functionality
If we consider address mapping first, there are certainly
architectural alternatives. For example, if you have a large enough
address space, you could use position independent code. With proper
support at the processor level, that can be done as conveniently as
virtual-memory-based mapping. The amount of hardware needed is
basically the same; the difference is in the abstraction.
I am not going into the protection part here. It is apparently a very
useful abstraction, providing lots of useful functionality such as
copy-on-write, flush-on-dirty, etc.
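As a purely illustrative sketch of how these two mechanisms combine,
consider a toy page table in Python (the names and numbers below are
invented for the example, not taken from any particular MMU):

    # Toy page table: virtual page number -> (physical frame, writable?)
    PAGE_SIZE = 4096
    page_table = {0: (7, True),
                  1: (3, False)}   # e.g. a copy-on-write page

    def translate(vaddr, write=False):
        """Mechanism (a): map a virtual address to a physical one.
        Mechanism (b): enforce the protection bits on the way."""
        vpn, offset = divmod(vaddr, PAGE_SIZE)
        frame, writable = page_table[vpn]   # missing entry = page fault
        if write and not writable:
            # the OS would trap here and, e.g., copy the page
            raise PermissionError("protection fault")
        return frame * PAGE_SIZE + offset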
The point is that even a good virtualisation abstraction seems to
consist of several components, some of which don't _necessarily_
imply others.
2. Virtual machine is more than virtual memory
==============================================
The majority of today's hardware does not support full virtual
machine functionality, i.e., virtualising also the protected mode.
In the mainstream architectures it has been appearing only
recently. Consequently, many of today's virtual machine products
either use software to "cheat", resulting in architecturally ugly
(but useful!) solutions, or are fully implemented in software, with
the apparent performance penalty.
My point here is that full virtualisation, such as in the case of a
virtual machine, requires much more than just virtualising the
address space.
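To illustrate what virtualising the protected mode means, here is a
deliberately simplified trap-and-emulate loop in Python; the opcodes
and the state dictionary are invented for the example:

    # Opcodes that touch the protected mode and therefore must trap.
    PRIVILEGED = {"load_page_table", "disable_interrupts"}

    def run_guest(program, vm_state):
        """Run a 'guest' instruction stream; privileged instructions
        trap to the hypervisor, which emulates their effect on the
        guest's *virtual* machine state, not on the real one."""
        for op, arg in program:
            if op in PRIVILEGED:
                vm_state[op] = arg   # hypervisor emulates the effect
            else:
                pass                 # unprivileged ops run natively
        return vm_state

    run_guest([("add", 1), ("load_page_table", 0x1000)], {})

Roughly speaking, the software "cheating" mentioned above is needed
when some privileged instructions fail to trap and have to be
rewritten (or the whole guest interpreted) before execution.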
3. Our current tunnelling practices are not very clean
======================================================
Returning now to the current IP-over-IP tunnelling practices (which
seem to be closest to virtualisation), I claim we could do much better.
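For reference, the encapsulation mechanism itself is trivial: prepend
an outer IP header whose protocol field says IP-in-IP (protocol 4,
RFC 2003). A minimal, illustrative Python version, ignoring options
and fragmentation:

    import socket
    import struct

    def ipip_encapsulate(inner_packet, outer_src, outer_dst):
        """Prepend a minimal outer IPv4 header (protocol 4, RFC 2003)."""
        header = struct.pack(
            "!BBHHHBBH4s4s",
            (4 << 4) | 5,              # version 4, header length 20 bytes
            0,                         # TOS
            20 + len(inner_packet),    # total length
            0, 0,                      # identification, flags/fragment
            64,                        # TTL
            4,                         # protocol 4 = IP-in-IP
            0,                         # checksum placeholder
            outer_src, outer_dst)
        # One's-complement header checksum over the 16-bit words.
        csum = sum(struct.unpack("!10H", header))
        csum = (csum & 0xFFFF) + (csum >> 16)
        csum = (csum & 0xFFFF) + (csum >> 16)
        csum = ~csum & 0xFFFF
        return (header[:10] + struct.pack("!H", csum)
                + header[12:] + inner_packet)

    outer = ipip_encapsulate(b"...inner IP packet...",
                             socket.inet_aton("192.0.2.1"),
                             socket.inet_aton("198.51.100.1"))

The outer header is all that a tunnel adds; everything interesting
(and problematic) lies in what the addresses in the two headers mean.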
The first issue, which I already touched on, relates to muddled address
space semantics. While one could argue that RFC 1918 addresses are
like the virtual address space in a virtual memory system, that
AFAICT doesn't quite hold. The difference lies in how you
communicate with components outside of that virtual space.
Having a very large address space with no overlapping addresses seems
to lead to a cleaner architecture than mapping address spaces. For
us, going to IPv6, Unique Local IPv6 addresses (draft-ietf-ipv6-
unique-local-addr-XX.txt; sometimes called Hinden-Haberman addresses)
may help.
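To make the contrast concrete: RFC 1918 prefixes are fixed and hence
overlap between sites, whereas Unique Local prefixes embed a 40-bit
pseudo-random Global ID, so two independently generated prefixes
collide only with probability about 2^-40. A Python sketch of the
generation step, following the spirit of the algorithm in that draft
rather than its exact letter:

    import ipaddress
    import secrets

    def make_ula_prefix():
        """Generate a /48 Unique Local IPv6 prefix in fd00::/8."""
        global_id = secrets.randbits(40)   # pseudo-random Global ID
        prefix = (0xFD << 120) | (global_id << 80)
        return ipaddress.IPv6Network((prefix, 48))

Because such prefixes are statistically unique, merging two networks
needs no renumbering and no translation, unlike with overlapping
RFC 1918 space.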
If you go there and get rid of RFC 1918 addresses, returning to
addresses that are globally unique, what remains is the question of
routing. For practical reasons, it looks like people may need
to use tunnelling even for Unique Local IPv6 addresses. The reason
for this is that the routing system does not easily allow them to be
globally routable, which, in turn, leads to another architectural
question:
Should the routing system better support local-only routes?
[There are lots of potential answers to both the narrower practical
question and to the larger architectural one; e.g. use MPLS, use
source routing, etc., but that is not the point of this message. The
point here is that the fact that tunnelling is needed may, again, be
an indication that some functionality is missing elsewhere, in this
case in routing.]
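Whatever the routing answer turns out to be, the "barrier" itself is
simple to state: local-only prefixes must never leak to external
peers. A trivial Python illustration (the function name is invented):

    import ipaddress

    ULA = ipaddress.IPv6Network("fc00::/7")   # Unique Local block

    def announceable(prefix):
        """True if a prefix may be announced to an external peer;
        local-only (ULA) routes are kept inside the site."""
        return not ipaddress.IPv6Network(prefix).subnet_of(ULA)

    assert announceable("2001:db8::/32")
    assert not announceable("fd12:3456:789a::/48")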
A second (perhaps converse) issue here relates to layering. Some
people expressed their belief that IP is such a useful service that
building virtual topologies over an existing one is beneficial.
Certainly so, but my question is how much of this is caused by the
current architecture, i.e., the fact that IP addresses are
simultaneously used as identifiers and locators. If these two roles
were separated (most probably leading to having two (sub)layers
instead of the current single IP layer), what would that virtual
topology bring?
My claim is that if we did the split, there would be much less need
for "managing" the IP address space, leading to a situation where
most (if not all) long term needs for IP virtualisation would disappear.
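To make the claimed split tangible, here is a toy Python sketch of
such an identifier (sub)layer; the class and method names are
invented, though real proposals in this space (e.g. the Host Identity
Protocol work) proceed roughly along these lines:

    import hashlib

    class IdLocatorLayer:
        """Toy identifier/locator sub-layer: upper layers bind to a
        stable identifier; the mapping to a routable locator may
        change underneath without breaking that binding."""

        def __init__(self):
            self.locators = {}           # identifier -> current locator

        @staticmethod
        def identifier(pubkey):
            # A location-independent name, e.g. a hash of a public key.
            return hashlib.sha256(pubkey).hexdigest()[:32]

        def register(self, ident, locator):
            self.locators[ident] = locator   # re-register on move

        def resolve(self, ident):
            return self.locators[ident]      # where to send packets now

With such a split, a "virtual topology" lives at the identifier
level: a host that moves or is renumbered updates its mapping instead
of requiring yet another layer of encapsulation.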
4. Full virtualisation for IPv6
===============================
A major question now appears to be: what should we do about IPv6 and
tunnelling? Based on the discussion so far, it appears to me that it
may be a good idea to try to figure out a full virtualisation
architecture for IPv6, i.e., one resembling virtual machines (instead
of virtual memory). That leads to a number of more practical questions:
- Who would need it?
- What would be the requirements that it would fulfil?
  - e.g., what do people need topology abstraction for?
  - what level of emulation are we expecting?
- What is missing (if anything) in the current solutions we have?
Maybe all we need is a BCP explaining how to use existing components
in the best possible way for virtualising IPv6 in a local network,
with appropriate barriers between the local and outside world?
But before that I'd like to hear what the real benefits are that
people would expect to get from it, as I am personally having a hard
time seeing them.
--Pekka