Re: where the indirection layer belongs

2003-09-08 Thread Robert Honore
Dear Masataka Ohta,

 I was a bit impetuous in saying that I would prefer not to modify 
libraries or implementations etc.  However, my aim so far is more to 
obtain for myself a clear understanding of the problem we are trying to 
solve, rather than trying to state requirements of the possible 
solutions to that problem.  My apologies for such a stupid statement.

I do believe that there are issues to be resolved, and they are reflected 
in currently active threads like the one about deprecating site-local 
addresses and the need for provider-independent addresses, as well as the 
"solving the real problem" thread started by Tony Hain.  I believe the 
issues being discussed there are 
real.  You might not agree that they are.

If the issues are real, then we need to specify or formulate them in a 
way that is amenable to deriving solutions for them.  If the issues are 
not real, then they are likely the artifacts of wrong perceptions or 
ideas that users and engineers in the IPv6 community have.  In that 
case, we need to replace those wrong perceptions or ideas with correct 
ones.  It seems to me, though, that nobody has stated clearly what those 
wrong perceptions and ideas are, much less said what is wrong with 
them and thus how to replace them with correct perceptions and ideas.

Yours sincerely,
Robert Honore.
Masataka Ohta wrote:
Robert Honore;


I would also prefer not to modify any of the 
libraries or implementations of those protocols, lest we break something.


It is a wrong requirement, wrong in various ways.

First of all, we must change something, which means we must change
some code.
Though the changes had better be simple, whether the changes can
be inserted as a module or not is irrelevant.
To make the changes simpler, the modification MUST be confined to
libraries or the kernel. So, we should try to change libraries or the
kernel to avoid changes to application code.
It is, of course, necessary to change application code in UDP cases
for address selection; that is, an API to control address selection in
IP headers from applications is necessary, which means that the IP and
transport layer code MUST be modified.
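
[An illustrative aside, not part of the message above: with the existing
socket API an application can already control which of its local addresses
ends up as the source address in the IP header of a UDP datagram, by
binding the socket to that address before sending.  The function name and
the minimal error handling below are invented for the sketch.]

    /* Illustrative sketch only: force a chosen local address as the
     * source of one outgoing UDP datagram by binding before sendto(). */
    #include <sys/socket.h>
    #include <netinet/in.h>
    #include <arpa/inet.h>
    #include <string.h>
    #include <unistd.h>

    int send_from(const char *local_addr, const struct sockaddr_in6 *dest,
                  const void *buf, size_t len)
    {
        int fd = socket(AF_INET6, SOCK_DGRAM, 0);
        if (fd < 0)
            return -1;

        struct sockaddr_in6 src;
        memset(&src, 0, sizeof src);
        src.sin6_family = AF_INET6;
        src.sin6_port = 0;                          /* any local port      */
        if (inet_pton(AF_INET6, local_addr, &src.sin6_addr) != 1 ||
            bind(fd, (struct sockaddr *)&src, sizeof src) < 0 ||  /* pins source */
            sendto(fd, buf, len, 0,
                   (const struct sockaddr *)dest, sizeof *dest) < 0) {
            close(fd);
            return -1;
        }
        close(fd);
        return 0;
    }
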
Secondly, preservation of code is less important than preservation
of the protocol, and the protocol is defined on the wire.
If you put some code below transport layer to modify packet content
visible from the transport layer, which is the case with MAST, you
change the transport layer protocol.
But, again, we must change something of the protocol.

So, the requirements are:

	to modify existing code

	to modify existing protocol

and any attempt, including but not limited to MAST, to avoid them
is not constructive; it only constrains the solution space, resulting
in less capable and less efficient solutions.
Of course, it is desirable

	to preserve existing application code

whenever possible. But, we must be aware that it is NOT always
possible.
In addition, it is desirable

	to preserve interoperability with existing protocols

which automatically means to preserve interoperability with existing
code of the peer. In this case, it is doable. Of course, all the
freedom to use multiple addresses is lost when the peer is a
legacy one.
			Masataka Ohta






Re: where the indirection layer belongs

2003-09-08 Thread Masataka Ohta
Robert Honore;

 It seems to me, though, that nobody has stated clearly what those 
 wrong perceptions and ideas are, much less said what is wrong with 
 them and thus how to replace them with correct perceptions and ideas.

A draft ID

Simple Internet Protocol, Again

on the real problems can be found at

ftp://ftp.hpcl.titech.ac.jp/draft-ohta-sipa--1.txt

Abstract

   IPv6 has failed to be deployed partly because of its complexity.

   IPv6 is mostly a descendant of SIP (Simple Internet Protocol) but has
   been fatally bloated by merging with other proposals and by trying to
   give IPv6 better functionality than IPv4, which makes IPv6 unnecessarily
   complex and, thus, worse than IPv4.

   SIPA (Simple Internet Protocol, Again) is a proposal attempting to
   restore simplicity to IPv6, sticking to the real-world operational
   requirements for the IPv4 Internet, to make IPv6 more acceptable to
   ISPs and end users.

Masataka Ohta



A roadmap for end-point identifiers? (was Re: where the indirection layer belongs)

2003-09-08 Thread Pekka Nikander
[Please direct replies either to the IPv6 or the IETF
mailing lists, but not both.  The default should be IPv6,
imho.]
Pekka Nikander wrote:
 Now, even though I believe that we should solve the problems (and
 apparently believe that there are sensible solutions), achieving
 consensus on solutions that require architectural change may take too
 long.  Hence, I also believe that we need some kind of a road map,
 with some temporary or intermediate solutions along the way to a
 more long-standing set of solutions.
Robert Honore replied:
 I agree, and your statement corresponds to what Keith Moore says about
 the solutions fitting into a framework that is yet to be specified.  I
 believe specification of that framework begins with our defining
 what an end-point is.
Good that we agree on a need for a roadmap.  Now, I want to return
back to your original problem analysis:
 *Stable (or reliable) end-point identifiers
 *Resiliency of application (protocol) in the face of sudden
  IP address changes
 *Self-organised networks
These are the goals that we need to focus on.  While designing
the longer term architectural solutions, we need to preserve
the current functionality, and think about transition mechanisms.
From this point of view, the only (semi-)stable end-point
identifiers we have today are IP addresses.  We both agree, and
I think quite a few others agree, that IP addresses are not very
good end-point identifiers.  However, they are used as such today.
Furthermore, it will take quite a long time to get something to replace
the IP addresses as end-point identifiers.  As has been discussed
several times, domain names do not work well enough, and therefore
we need a new name space, I think.
Consequently, we have to provide (semi-)stable IP addresses
for IPv6 networks.  Based on the recent discussion at the IPv6
WG, apparently people think that PA addresses are not stable
enough.  Hence, at least to me, the Hinden/Haberman addresses
look like a good temporary solution.  It seems to provide stable
IP addresses, which can temporarily be used as end-point identifiers,
with the expectation that they will be eventually replaced with
proper end-point identifiers.
As for application resiliency, Christian Huitema's
approach of (mis)using Mobile IP may work well enough for a while.
However, it has a number of architectural problems that make
me think of it only as a temporary solution.  Going further,
if we did not have any other reasons for proper end-point
identifiers, I think that Dave Crocker's MAST might be a good next
step.  However, since I do think that we most probably do need
stable and secure end-point identifiers, I think that something
like HIP will be more appropriate.
[I'm relatively ignorant of the exact details of self-organized
 networks, and therefore I don't want to comment on that.]
Given the above, I think we could have a roadmap that might
look something like the following:
 Stable identifiers:    Hinden/Haberman ------------------ New name space
                                                              for end-point
 Resiliency on          Huitema MIPv6  --- (MAST) --------- identifiers
 address changes:       multi-homing       (maybe HIP)
--Pekka Nikander





Re: where the indirection layer belongs

2003-09-05 Thread Robert Honore
Dear Pekka Nikander,

Please forgive the late reply.

Where can I find out more about Dave Crocker's MAST?

See the rest of my message embedded among the quotation of your text.

Pekka Nikander wrote:
Robert,

I like your analysis very much.  Thank you for writing it up.

However, I also see a kind of causality here:  it looks to me that 
stable end-point identifiers are mainly needed to make applications 
survive IP address changes.  Dave Crocker's MAST is a good example 
how you can do that without having stable end-point identifiers.
There is more to stable end-point identifiers than just the ability to
make an application resilient.  I will argue that IP addresses *do not*
identify end-points, but node interfaces.  That is, they only specify
objects on the path to the end-points.  Applications, users and system
administrators need to specify more than node interfaces.  They try to do
so by twisting IP addresses to all kinds of usages for which they were
not intended and are ill-suited in my opinion.  Indeed it is a feature
that IP addresses and their definition as an identifier for a node
interface are so flexible that it is possible to contemplate extending
their usage for those purposes.
On the other hand, security looks to me as a good reason for having 
stable end-point identifiers.  If you can securely recognize an 
end-point (with emphasis on the *re-* part of re-cognize), you can 
develop trust.  Trust, in turn, is very handy for lowering 
transaction costs.

Indeed it is, and I would add that end-point identifiers (together with
node interface identifiers) are the thing that system administrators and
system security officers want.
Even facing the danger of opening yet another rat hole, in my opinion
 we should not have a very strict definition for end-point. That is,
 IMHO end-point should and could be a fuzzy concept, somewhat like
the concept of a site is today.
From my point of view, an end-point may be a process, a group of 
processes, a host, or even a server cluster offering services as a 
unit.  To me, it looks like fate sharing and common semantics are the
 key points here.  An end-point should either work or fail, it should
 not be usual for half of an end-point to fail while the other half 
continues.  An end-point should also be considered at the 
application level as a single unit.

That raises two questions: whether it is reasonable to use such
an inclusive notion of an end-point, and what is the simplest kind of
structure or object we can use to implement an end-point identifier.  We
might have to restrict the notion of an end-point somewhat.
In my opinion, we clearly need solutions to all of these problems. 
Furthermore, it looks like introducing stable end-point identifiers 
to the stack almost automatically protect applications from the 
changes of IP addresses.  I also tend to believe that stable 
end-point identifiers may also help to build self-organized networks.
 However, IMHO the problem of self-organized networks is more 
researchy in nature than the other two.
With respect to the protection of applications from changes of IP
addresses, the effectiveness of that protection (or its robustness) will
depend on what the final specification of end-point identifiers is.
With respect to the problem of self-organised networks, that might be a
research problem, but I would argue that it is a here-and-now research
problem.
Now, even though I believe that we should solve the problems (and 
apparently believe that there are sensible solutions), achieving 
consensus on solutions that require architectural change may take too
long.  Hence, I also believe that we need some kind of a road map, 
with some temporary or intermediate solutions along the way to a 
more long-standing set of solutions.

I agree, and your statement corresponds to what Keith Moore says about 
the solutions fitting into a framework that is yet to be specified.  I 
believe specification of that framework begins with our defining what an 
end-point is.

Yours sincerely,
Robert Honore.





Re: where the indirection layer belongs

2003-09-05 Thread Spencer Dawkins
Dear Robert,

Dave's MAST proposal was announced at
http://www.ietf.org/mail-archive/ietf-announce/Current/msg25938.html.

It is not entirely clear where this draft should be discussed. I
bailed and sent my comments to Dave offlist, and asked him to reply on
SOME list if my comments were helpful (candidates include MULTI6, but
I can't speak definitively). I'm sure guidance would be graciously
accepted.

Spencer

- Original Message - 
From: Robert Honore [EMAIL PROTECTED]
To: Pekka Nikander [EMAIL PROTECTED]
Cc: [EMAIL PROTECTED]; [EMAIL PROTECTED]; [EMAIL PROTECTED]
Sent: Friday, September 05, 2003 3:48 PM
Subject: Re: where the indirection layer belongs


 Dear Pekka Nikander,

 Please forgive the late reply.

 Where can I find out more about Dave Crocker's MAST?




Re: where the indirection layer belongs

2003-09-05 Thread Iljitsch van Beijnum
On Friday, sep 5, 2003, at 23:15 Europe/Amsterdam, Spencer Dawkins 
wrote:

Dave's MAST proposal was announced at
http://www.ietf.org/mail-archive/ietf-announce/Current/msg25938.html.

It is not entirely clear where this draft should be discussed. I
bailed and sent my comments to Dave offlist, and asked him to reply on
SOME list if my comments were helpful (candidates include MULTI6, but
I can't speak definitively). I'm sure guidance would be graciously
accepted.
Similar stuff has been discussed on the multi6 list in the past, so I 
see no reason why this couldn't be discussed there. Doing so isn't even 
in direct contradiction with the charter.   :-)




Re: where the indirection layer belongs

2003-09-05 Thread Masataka Ohta
Robert Honore;

 (regarding the complexity of putting a general-purpose layer to survive
 address changes between L4 and L7)
  
  
  It is not merely complex but also useless to have such a layer.
 
 
 Right now I am not fully aware of all of the specifics of the issues in 
 trying to implement such a layer, but the statement you make in the 
 paragraph just below does not seem to support your statement that "It is 
 not merely complex but also useless to have such a layer."  I will 
 explain my position below.

The issue has been discussed and analyzed long ago.

Just making a complex proposal with new layers, which is known to
be useless, is not a constructive way of discussion, and the proposal
should be ignored without much discussion.

  The basic problem of the approach to have such a layer is that
  only the application layer has proper knowledge on proper
  timeout value to try other addresses.
 
 If such a layer is useless as you say, then the application could see no 
 benefit to being able to parametrise the transport or network layers 
 with such information as the proper timeout values to try other 
 addresses.  It would also be impossible for the application to benefit 
 from being able to find other addresses to try.

Read the draft and say TCP.

 Another objection I have to your statement:  If the application layer is 
 the one that has the proper knowledge of things like timeouts (I suppose 
 there are things), then it should be possible to implement something on 
 the stack that is close to the application wherein it can collect that 
 information and parametrise the behaviour of the rest of the lower 
 layers.  If by your statement you are saying that it is impossible to 
 implement such a thing, then you will have to prove it.

Read the draft and say TCP.

  So, the issue can be solved only involving application layer, though
  most applications over TCP can be satisfied by a default timeout of
  TCP. In either case, there is no room to have an additional layer.
 
 I agree completely with your first sentence in this paragraph.  The 
 problem we have right now is that, save for the application specifying 
 to the transport which peer port it wants to connect with, it has no 
 further control over the behaviour of the transport or the network 
 layers other than to possibly drop the connection.

Wrong.  An additional API to the existing layers is just enough.
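
[An illustrative aside, not taken from the draft: one way an application
can already apply its own timeout knowledge through the existing socket
API alone is a non-blocking connect() that falls back to the next address
from getaddrinfo() when the application's own deadline expires.  The
function name and the per-address timeout policy are assumptions made for
this sketch.]

    /* Illustrative sketch only: the timeout value comes from the
     * application; everything else is unchanged library and kernel code. */
    #include <sys/socket.h>
    #include <sys/select.h>
    #include <netdb.h>
    #include <fcntl.h>
    #include <unistd.h>
    #include <string.h>

    /* Try each address of host:port in turn, giving each at most
     * timeout_sec seconds to complete the TCP handshake.
     * Returns a connected (still non-blocking) socket, or -1. */
    int connect_with_timeout(const char *host, const char *port, int timeout_sec)
    {
        struct addrinfo hints, *res, *ai;
        int fd = -1;

        memset(&hints, 0, sizeof hints);
        hints.ai_family = AF_UNSPEC;          /* IPv6 and IPv4 candidates  */
        hints.ai_socktype = SOCK_STREAM;
        if (getaddrinfo(host, port, &hints, &res) != 0)
            return -1;

        for (ai = res; ai != NULL; ai = ai->ai_next) {
            fd = socket(ai->ai_family, ai->ai_socktype, ai->ai_protocol);
            if (fd < 0)
                continue;
            fcntl(fd, F_SETFL, O_NONBLOCK);   /* connect() returns at once */
            connect(fd, ai->ai_addr, ai->ai_addrlen);

            fd_set wfds;
            FD_ZERO(&wfds);
            FD_SET(fd, &wfds);
            struct timeval tv = { timeout_sec, 0 };  /* the app's knowledge */

            if (select(fd + 1, NULL, &wfds, NULL, &tv) == 1) {
                int err = 0;
                socklen_t errlen = sizeof err;
                getsockopt(fd, SOL_SOCKET, SO_ERROR, &err, &errlen);
                if (err == 0)
                    break;                    /* connected within the budget */
            }
            close(fd);                        /* too slow or refused: next   */
            fd = -1;
        }
        freeaddrinfo(res);
        return fd;
    }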

 I would also prefer not to modify any of the 
 libraries or implementations of those protocols, lest we break something.

That is the source of your misunderstanding.

Your requirement is not only meaningless but also harmful. See
other mail on how MAST is not transparent for details.

In short, it is as complex, transparent, meaningless and harmful
as NAT.

 Having looked at draft-ohta-e2e-multihoming-05.txt, I am not convinced 
 that it supports your statement that "It is not merely complex but 
 useless to have such a layer", nor do I believe that the presence of 

See above.

  Layering is abstraction and not indirection at all.
  
 Agreed.  We should use the correct terminology.

And a new layer, from which information in the other layers is
hidden by the abstraction, is useless for implementing
mechanisms that use that information.

Masataka Ohta



Re: where the indirection layer belongs

2003-09-03 Thread Pekka Nikander
Dave,

Dave Crocker wrote:
DC In general I suggest we find some specific scenarios that require a new
DC construct for end-point identifiers. ...
Concrete scenarios are very good indeed.

PN On the other hand, security looks to me as a good reason for
PN having stable end-point identifiers.
DC and rendezvous.

DC any reference to an object that requires use outside of an existing
DC context.
Well, I consider an *identifier* as something that is more or
less intrinsically bound to an identity and a *name* as something
that merely indicates an entity, i.e., involves indirection.
In other words, an identifier implies sameness and stability of
the identified entity, while a name does not have those connotations
to the same extent.
From this point of view, IP addresses are identifiers.  However, they
are not end-point identifiers but identifiers for topological
locations within the routed network.
Now, you may be able to do rendezvous with just names, e.g.,
with domain names.  And for referencing external objects, it
is often much better to use names than identifiers.  Furthermore,
I find it hard to imagine situations where you want to reference
objects that are really outside of any context; IMHO there is
always some context, and names are always bound to such a context.
PN Even facing the danger of opening yet another rat hole, in my
PN opinion we should not have a very strict definition for end-point.
PN ...
PN  From my point of view, an end-point may be a process, a group of
PN processes, a host, or even a server cluster offering services as
DC Just for fun, let's start by using the term domain name and try to
DC understand why it will not suffice.
DC
DC domain names have been successfully used for all of the examples you
DC give.
In my opinion, domain names are probably good for coarse-grain
rendezvous and especially object reference (e.g. URLs).  They have
their problems in disconnected networks, but LLMNR / mDNS seems
to help there.  On the other hand, domain names are not very good
for security.  You need some external infrastructure, and unfortunately
our strawman economic analysis shows that secure DNS may be
economically infeasible.  The cost of security is a crucial issue here.
One of the success factors of SSH has been the low deployment cost.
--Pekka Nikander





Re: where the indirection layer belongs

2003-09-03 Thread Robert Honore
Dear Keith Moore,

Maybe I read your paper on project SNIPE too quickly, but it was not 
immediately clear that the problems you mentioned were a specific 
result of an attempt to make the application resilient against (sudden) 
changes in IP address.  More specifically, it was not clear from that 
report whether the additional complexity came from the attempt to 
provide the kind of resilience we are seeking, or from the rather 
ambitious goals of project SNIPE.  I will re-read more slowly and 
carefully this time.

Yours sincerely,
Robert Honore.
Keith Moore wrote:
(regarding the complexity of putting a general-purpose layer to survive
address changes between L4 and L7)

 But why do you assert that it will take lots of complexity and 
overhead?  Can you point to some code where they tried this?  As far
as I know, nobody has really given this an earnest try as yet.  At
least not with any IP protocols.


I tried this in connection with a project called SNIPE that I worked on
several years ago.  SNIPE was an attempt to build a very
reliable distributed computing environment that supported, among other
things, the ability to access a computing resource via multiple
addresses (mostly in order to exploit high-bandwidth local networks
not necessarily using IP), and the ability of both hosts and processes
to migrate to other addresses.  It used a DNS-like service similar to
RESCAP (for those who remember that) to register the addresses at which
a process was accessible, and it attempted to provide TCP-like streams
on top of TCP and this registry that would survive changes in those
addresses.  Basically I found that you can get such a service to work
reasonably well and reasonably efficiently if endpoints don't move
without adequate notice.  OTOH if hosts or processes do move without
adequate notice then you end up needing to implement the mechanisms I
mentioned earlier, and that involves extra copies and extra overhead. 

The reason I am proposing that the two problems (changing addresses
with adequate notice and changing addresses without adequate notice) be
treated separately is that by trying to make a single mechanism serve
both purposes you end up with a lot of inefficiency and/or cost that
aren't needed in most cases.  And that's true (for different reasons)
regardless of whether you insert that single layer between L3 and L4 or
between L4 and L7.

What specific glue do you believe it requires for the L4 to L7
approach? 


Thought I'd said this already: buffering of data until acknowledged by
the peer, generation and processing of acknowledgements, retransmission
of lost messages due to broken connections, windowing (so you don't
degrade to stop-and-wait), duplicate suppression.  You also need to
multiplex control and data information over the same stream and to probe
for other addresses when you get a failure on the one you were using.
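
[An illustrative aside, not taken from SNIPE: a rough picture of the
per-session bookkeeping such an L4-L7 layer ends up re-implementing on top
of TCP, namely copies buffered until acknowledged, cumulative
acknowledgements, and duplicate suppression after a reconnect.  All names
and sizes are invented; retransmission, out-of-order buffering and
windowing are left out.]

    /* Illustrative sketch only: invented names, no relation to any real
     * implementation.  Only receive-side duplicate suppression and the
     * acknowledgement bookkeeping are shown. */
    #include <stddef.h>
    #include <stdint.h>

    #define SL_WINDOW 64                  /* unacked messages kept for resend */

    struct sl_msg {
        uint64_t seq;                     /* session-layer sequence number    */
        size_t   len;
        char     data[1024];
    };

    struct sl_session {
        uint64_t next_send_seq;           /* next sequence number to assign   */
        uint64_t send_unacked;            /* oldest message not yet acked     */
        uint64_t recv_next;               /* next in-order sequence expected  */
        struct sl_msg sendbuf[SL_WINDOW]; /* copies kept until the peer acks  */
    };

    /* On receive: drop duplicates replayed after a reconnect, deliver
     * in-order data, and return the cumulative ack to send to the peer. */
    uint64_t sl_on_receive(struct sl_session *s, const struct sl_msg *m,
                           void (*deliver)(const char *, size_t))
    {
        if (m->seq < s->recv_next)        /* already delivered: duplicate     */
            return s->recv_next;
        if (m->seq == s->recv_next) {     /* in order: hand to application    */
            deliver(m->data, m->len);
            s->recv_next++;
        }
        return s->recv_next;              /* cumulative acknowledgement       */
    }

    /* On an ack from the peer: release buffered copies below `ack`. */
    void sl_on_ack(struct sl_session *s, uint64_t ack)
    {
        if (ack > s->send_unacked)
            s->send_unacked = ack;        /* those need no retransmission     */
    }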

 How does that compare to what is needed in an L3 solution?


If you work on the problem at (or just above) L3, transport protocols
already have the code needed to recover from lost messages, so you don't
have to reimplement that.  You basically need a mechanism by which the
layer can realize it needs to start routing packets differently, and do
so.  You probably need multiple ways that the layer can get that
information because the remote address can change for a variety of
reasons and in lots of different places in the network.   (That's
equally true for the L4-L7 layer as for the L3-L4 layer, but the L4-L7
layer isn't in a position to get some of that information.  The L3-L4
layer can potentially recover from address changes more quickly but to
do that safely it has to be able to authenticate and establish a basis
for trust in a wider variety of information sources.)

Yes you can do that but it presumes that the host knows a priori
whether or not it needs the stabilization layer.  I would make the
mechanism used to provide stable addresses a property of the
network- either the network provides reasonably stable addresses
(i.e. plenty of prior notice before changing them) or it provides a
stabilization layer.  That way, networks that don't need it don't
pay the overhead.
But I would argue that the host or at least the application's designer
knows whether it will need the stabilisation layer. 


It can't know that reliably unless the network without the stabilization
layer has well-defined properties - e.g. the network won't change
addresses of a network without advertising those changes with a minimum
amount of advance notice.  If addresses can potentially change at
arbitrary times with no assurance of stability then every app needs the
stabilization layer (or provide its own means of recovery).

Making the 
mechanism that provides the stable network addresses a property of the
network leaves the question of how.  Even if that were achieved
though, that still does not completely or effectively address the
problem of one application process identifying its peer across 

RE: where the indirection layer belongs

2003-09-02 Thread Yuri Ismailov (KI/EAB)

No matter where the stabilization layer(s) live, using DNS as a
means to map from identity to locations simply won't work.  It might be
good enough for initial connection (assuming that if a service exists on
multiple hosts, any of them will do), but it's not good enough for
re-establishing an already-open connection, because you might get a
different host the next time.

This is exactly the point!

 But the real question here is: does this new thing have to be a 
 layer? 

It depends on which thing you are talking about.  For the L3-L4 thing,
it's either a new layer or a change to an existing layer.  If you
have both the L4-L7 thing and the L3-L4 thing, the former is either a
new layer or (my personal opinion) a new API that knowledgable apps
call explicitly.

If the L4-L7 thing is an API on top of the socket in its current form, then I believe 
that all the limitations are still there.
This has been implemented and shown to be the case.

/Yuri




Re: where the indirection layer belongs

2003-09-02 Thread Robert Honore
Dear Keith Moore,

Thank you for your reply.  It seems that we are without a forum though, 
since what we are discussing is, according to Tony Hain, not in line 
with the IPv6 working group charter.  Maybe we really do need a new 
working group for this issue.  Should we propose the formation of one?

Keith Moore wrote:
These capabilities should be regarded as bugs which are being fixed.

In particular, the fact that IPv6 hosts can, in ordinary circumstances,
have multiple addresses has led people to believe that it's reasonable
to expect IPv6 apps to deal with an arbitrary number of addresses per
host, some of which work to send traffic to the destination and some of
which don't, and to have this behavior vary from one source location to
another.  First, nobody has ever explained how these hosts can reliably
determine which addresses will work.  Neither source address
selection nor multi-faced DNS are satisfactory answers.  Second, this
robs apps of the best endpoint identifier they have.
I agree that the bug in this picture here is that nodes can have 
multiple addresses, some of which work and some of which don't work in 
different circumstances.  It really should be that an address which is 
advertised for a node should be valid for communication with that node 
under all circumstances.

I disagree with you if you are implying that the node should not be 
allowed to have multiple addresses on an interface, and I never agreed 
with the notion that the IP address should double as a node's identifier 
or as the identifier of anything other than the interface with which it 
is associated.  If the IP address as the identifier of the endpoint is 
the reality with which we must live, then I can live with that, but 
don't ask me to consider that the better circumstance.

No, it's not worthwhile.  Any kind of routing needs to happen below the
transport layer rather than above it.  That's not to say that you can't
make something work above the transport layer, but that to do so you
have to re-implement routing, acknowledgements, buffering and
retransmissions, duplicate suppression, and windowing in this new layer
when transport protocols already provide it.
I never said anything about forcing the application to talk directly to 
the network layer as you seem to imply by your second sentence.  It 
might have come out wrong, but what I was trying to say was that for 
applications that need it, what is wrong with there being a standard 
(choose your favourite name) adaptation or stabilisation service or 
interface or protocol between the application and the transport layer 
(like Tony Hain suggests)?

Besides, even if you mean that the presence of such a stabilisation 
layer (to use Mr. Hain's nomenclature) would require the implementation 
of routing, acknowledgements, buffering etc., you are not necessarily 
routing, acknowledging, retransmitting etc., the same data that the 
transport is, are you?  You might be doing that for higher-level objects 
where the transmission of any one might have required the establishment, 
usage and teardown of one or more transport connections.  Would you find 
that I am engaging in excessive sophistry if I were to argue that, if 
that is what the application needs, then that application should be able 
to implement it?

Good question.  My best answer so far is: stable enough so that the
vast majority of applications don't have to implement additional logic
to allow them to survive broken TCP/SCTP/etc. connections, or (to put it
another way) stable enough so that failures due to address/prefix
changes are not a significant source of failure for most applications
(as compared to, say, uncorrectable network failures and host failures).
Am I to presume that you are not including in your categorisation of 
uncorrectable network failures and host failures, the possibility of one 
home of a multihomed host going down, while still leaving the host 
reachable by one of its other homes?  As far as I can tell, none of 
that is well addressed as yet.

IMHO, apps should be able to assume that an advertised address-host
binding is valid for a minimum of a week.  This is a minimum - it
should be longer whenever possible.  (however there's no requirement to
maintain addresses longer than the nets will be accessible anyway -
i.e., you don't expect the addresses for the ietf conference net to
remain valid after the net is unplugged...but they shouldn't be reused
within a week either.)
I have no real objection to the address-host binding being valid for a 
minimum of a week or any duration greater than two round-trip times 
node-to-node.  But an address-to host binding isn't really one, is it? 
By that I mean that the real and effective binding is to an interface. 
If I may quote RFC 1883, it defines an address as follows.

   address - an IPv6-layer identifier for an interface or a set of
 interfaces.
While we have regarded the node and its interface as one and the same, 
it really 

Re: where the indirection layer belongs

2003-09-02 Thread Pekka Nikander
Michel Py wrote:
IMHO the only place to put the ID/LOC indirection layer (I would say
sub-layer) that does not break a million things is:
I like the third stack, added to the right, even more.  A kinda
new waist for the stack.  OTOH, I think that most probably
something new is also needed at L4-L7, but I am not quite sure
yet what.
               Current         Model with        HIP model
                Model          indirection
            :           :     :           :     :           :
            +=====+=====+     +=====+=====+     +=====+=====+
Transport   | TCP | UDP |     | TCP | UDP |     | TCP | UDP |
            +=====+=====+     +=====+=====+     +=====+=====+
            |           |     |  ID/LOC   |     |  ID/LOC   |
Network     |   IPv6    |     +-----------+     +=====+=====+
            |           |     |   IPv6    |     | IPv4|IPv6 |
            +===========+     +===========+     +=====+=====+
            :           :     :           :     :           :
--Pekka Nikander





Re: where the indirection layer belongs

2003-09-02 Thread Pekka Nikander
Robert,

Robert Honore wrote:
...  As such, I can distinguish the following issues as aspects of 
the problem given all that was mentioned in this thread, the "solving 
the real problem" thread and the one on the IPv6 mail list about 
deprecating Site Local addresses and the usage of IPv6 Link Local 
addresses.  They are as far as I can tell the following.

*Stable (or reliable) end-point identifiers
*Resiliency of application (protocol) in the face of sudden IP 
 address changes
*Self-organised networks
I like your analysis very much.  Thank you for writing it up.

However, I also see a kind of causality here:  it looks to me
that stable end-point identifiers are mainly needed to make
applications survive IP address changes.  Dave Crocker's MAST
is a good example how you can do that without having stable
end-point identifiers.
On the other hand, security looks to me as a good reason for
having stable end-point identifiers.  If you can securely
recognize an end-point (with emphasis on the *re-* part of
re-cognize), you can develop trust.  Trust, in turn, is very
handy for lowering transaction costs.
With respect to stable end-point identifiers, we must in my opinion, 
still specify what we are calling an end-point and settle once and for 
all the question of whether an IP address is a suitable candidate for 
such an identifier.
Even facing the danger of opening yet another rat hole, in my
opinion we should not have a very strict definition for end-point.
That is, IMHO end-point should and could be a fuzzy concept,
somewhat like the concept of a site is today.
From my point of view, an end-point may be a process, a group of
processes, a host, or even a server cluster offering services as
a unit.  To me, it looks like fate sharing and common semantics
are the key points here.  An end-point should either work or fail,
it should not be usual for half of an end-point to fail while the
other half continues.  An end-point should also be considered
at the application level as a single unit.
My questions following from all that are two.  Is it worth it to attempt 
a solution to any of the aforementioned problems?  If so, which one 
should we tackle first?
In my opinion, we clearly need solutions to all of these problems.
Furthermore, it looks like introducing stable end-point identifiers
to the stack almost automatically protects applications from
the changes of IP addresses.  I also tend to believe that stable
end-point identifiers may also help to build self-organized
networks.  However, IMHO the problem of self-organized networks
is more researchy in nature than the other two.
Now, even though I believe that we should solve the problems (and
apparently believe that there are sensible solutions), achieving
consensus on solutions that require architectural change may take
too long.  Hence, I also believe that we need some kind of a road
map, with some temporary or intermediate solutions along the way
to a more long-standing set of solutions.
--Pekka Nikander





Re: where the indirection layer belongs

2003-09-02 Thread Dave Crocker
Pekka,

PN that stable end-point identifiers are mainly needed to make
PN applications survive IP address changes.  Dave Crocker's MAST
PN is a good example how you can do that without having stable
PN end-point identifiers.

In general I suggest we find some specific scenarios that require a new
construct for end-point identifiers. Most discussions treat the
abstraction without being clear about the details of use. With concrete
scenarios, it is possible to consider alternatives... or agree that none
is viable.


PN On the other hand, security looks to me as a good reason for
PN having stable end-point identifiers.

and rendezvous.

any reference to an object that requires use outside of an existing
context.


PN Even facing the danger of opening yet another rat hole, in my
PN opinion we should not have a very strict definition for end-point.
...
PN  From my point of view, an end-point may be a process, a group of
PN processes, a host, or even a server cluster offering services as


Just for fun, let's start by using the term domain name and try to
understand why it will not suffice.

domain names have been successfully used for all of the examples you
give.



d/
--
 Dave Crocker mailto:[EMAIL PROTECTED]
 Brandenburg InternetWorking http://www.brandenburg.com
 Sunnyvale, CA  USA tel:+1.408.246.8253, fax:+1.866.358.5301




Re: where the indirection layer belongs

2003-09-02 Thread Keith Moore
 Maybe I read your paper on project SNIPE too quickly, but it was not 
 immediately clear that the problems you mentioned  were a specific 
 result of an attempt to make the application resilient against
 (sudden) changes in IP address. 

that wasn't quite the purpose of SNIPE.  SNIPE didn't really anticipate
that the network would change IP addresses out from under it, but rather
that processes (and thus their IP addresses) would need to be accessible
from multiple addresses (in order to be able to use the most efficient
network medium available), and to be able to migrate from one address to
another (in order that the system as a whole could continue to operate
for long periods of time without interruption).

still, the mechanisms we built for SNIPE (and then discarded) were very
similar to what people are now proposing as a way to cope with changes
in IP address, and our experience with SNIPE seems like a good
indicator of how well those mechanisms could be expected to work.



Re: where the indirection layer belongs

2003-08-30 Thread Iljitsch van Beijnum
On Friday, aug 29, 2003, at 23:06 Europe/Amsterdam, Keith Moore wrote:

It's not uncommon to see a FQDN point to several IP addresses so that
the service identified by the FQDN can be provided either by

(a) multiple hosts, or
(b) a host with multiple addresses.

No.  A client can't tell whether multiple addresses associated with a
DNS name indicate multiple hosts or multiple addresses for the same
host.
Right. And a session can't jump from one address to another either. So 
when we implement the latter, we also address the former.

As a member of the multi6 design team that works on a (b) solution I'm
convinced that such a solution will provide many benefits and should
be developed and implemented.

And I'm equally convinced that a solution that assumes (b) is a
nonstarter.  There is already too much practice that conflicts with
it.
To be more precise: the idea is to have transport sessions move from 
one address to another when there is a rehoming event. Obviously there 
will be changes to the process of publishing additional addresses.

It would make sense to
create something new that sits between the transport(s) and the
application, and delegate some tasks that are done by applications
today, most notably name resolution. This allows us to implement new
namespaces or new address formats without any pain

No it doesn't.  What it allows us to do is impose subtle changes in
the behavior of lower layers, and introduce new failure modes, without
giving applications any way to deal with them.
Well if we make a new API that allows applications to set up sessions 
based on FQDNs, and then later we decide that the next version of IP 
really needs variable length addresses where the address length in bits 
is a prime number, existing applications _should_ continue to work. 
Experience shows there are always cases where things aren't as simple 
as all this in practice, but I don't see how this can be worse than 
being 100% positive that the applications won't work and must be 
changed to support the new address format.

But the real question here is: does this new thing have to be a
layer?

It depends on which thing you are talking about.
New API and/or mechanisms to distribute operations over multiple hosts.

For the L3-L4 thing, it's either a new layer or a change to an 
existing layer.
Agree: both ends must cooperate, so a layer (new or modifications to an 
existing one) makes sense.

If you have both the L4-L7 thing and the L3-L4 thing, the former is 
either a new layer or (my personal opinion) a new API that 
knowledgable apps
call explicitly.
Right, and API != layer.




Re: where the indirection layer belongs

2003-08-30 Thread Keith Moore
 To be more precise: the idea is to have transport sessions move from 
 one address to another when there is a rehoming event. Obviously there 
 will be changes to the process of publishing additional addresses.

I'm also interested in ways of doing this.  I just don't think it's 
appropriate to use FQDNs as the identifiers.

Keith



Re: where the indirection layer belongs

2003-08-29 Thread Keith Moore
 Keith Moore wrote:
  Second, this robs apps of the best endpoint identifier they have.
 
 Rather than being so locked into topology locators as endpoint
 identifiers, we need to be specifying an infrastructure for endpoint
 identifiers and any mapping protocol that might be needed. 

I don't disagree, but the devil is in the details, and you also have to
pay careful attention to transitions.

 To some
 degree this is where the multi6 design team is headed, but they appear
 to have a goal of 'transparently' stuffing the result in the lower
 layers. The one thing that is obvious from that approach is that it
 will end up breaking something.

I can't comment on multi6 since I'm not up to date on what they are
doing.

   Any kind of routing needs to happen 
  below the transport layer rather than above it.  That's not 
  to say that you can't make something work above the transport 
  layer, but that to do so you have to re-implement routing, 
  acknowledgements, buffering and retransmissions, duplicate 
  suppression, and windowing in this new layer when transport 
  protocols already provide it.
 
 This is simply wrong. Decisions about topology need to happen below
 the application, but that does not equate to below the transport API,
 unless you hold to the position that the app/transport interface is a
 sacred invariant.

You are missing something fundamental here - if a TCP connection breaks
(isn't closed cleanly) then the two endpoints can get out of sync
regarding how much data was delivered.  You can't fix this at higher
layers without an unacceptable amount of complexity and overhead.
This has nothing to do with the app/transport interface being a sacred
invariant - it happens any time you try to layer something on top of
transport that has to survive breakage of the transport layer.

(how many ways do I have to explain this?)


  If an address becomes invalid 
  because of a topology change somewhere distant in the 
  network, how is a layer above layer 4 going to know about it? 
   It doesn't have access to routing protocol updates - those 
  happen at layer 3 and aren't normally propagated to hosts 
  anyway.  When you think about it, you probably don't want 
  hosts to have to listen for information about distant state 
  changes in the network - that would place a tremendous burden 
  on small hosts and nets with low-bandwidth connections.
 
 Yet you advocate propagating L3 changes to all host stacks so that
 some yet to be defined glue can magically make the change transparent
 to the app.

It's a lot less glue than the L4-L7 approach, and most of it has to deal
with authentication that would be needed for any kind of
remotely-originated routing update anyway, regardless of what layer it
lived at.

 Rather than relying on 'sometimes wanted, sometimes not'
 magic in the lower layers, it makes much more sense to put an optional
 layer above L4 to reopen a path using alternate topology information.

No, it doesn't make sense to add a considerable amount of overhead and
complexity that isn't needed and often won't be used.

 This still allows the app to choose a direct L4 interface, and removes
 the need to have the app say 'give me mapping, or turn it off' in the
 API. That would be implicit in its choice of the stabilization layer
 vs. direct L4.

Yes you can do that but it presumes that the host knows a priori whether
or not it needs the stabilization layer.  I would make the
mechanism used to provide stable addresses a property of the network-
either the network provides reasonably stable addresses (i.e. plenty
of prior notice before changing them) or it provides a stabilization
layer.  That way, networks that don't need it don't pay the overhead.

Keith



RE: where the indirection layer belongs

2003-08-29 Thread Tony Hain
Keith Moore wrote:
 You are missing something fundamental here - if a TCP 
 connection breaks (isn't closed cleanly) then the two 
 endpoints can get out of sync regarding how much data was 
 delivered.  You can't fix this at higher layers without an 
 unacceptable amount of complexity and overhead. This has 
 nothing to do with the app/transport interface being a sacred 
 invariant - it happens any time you try to layer something on 
 top of transport that has to survive breakage of the transport layer.

And apps are in exactly that position today. This is still not a valid
reason to insist on adding complexity into the transport or IP layers. If a
connection breaks before closing, the notification up is 'broken', then
whatever is there has to decide if it wants to try something else. Today the
app has to have the complexity, but there is no reason we can't put that
into a stack layer once for any app that wants it.

 It's a lot less glue than the L4-L7 approach, 

That is a matter of speculation.

 and most of it 
 has to deal with authentication that would be needed for any 
 kind of remotely-originated routing update anyway, regardless 
 of what layer it lived at.

I agree, but the lower layers are not designed with that in mind, so to some
it would appear you want a substantial amount of change to something that is
already working well. Never mind the transition/deployment/interoperability
issues.


 No, it doesn't make sense to add a considerable amount of 
 overhead and complexity that isn't needed and often won't be used.

More speculation. In any case, if it is not used it is simply an idle module
at the top of the stack. 


 Yes you can do that but it presumes that the host knows a 
 priori whether or not it needs the stabilization layer.  I 
 would make the mechanism used to provide stable addresses a 
 property of the network- either the network provides 
 reasonably stable addresses (i.e. plenty of prior notice 
 before changing them) or it provides a stabilization layer.  
 That way, networks that don't need it don't pay the overhead.

Optimizing before the design is started only ensures that something will get
overlooked or dropped as 'too much overhead'. 

I raised the 'right problems' thread after the observation that we had
several embryonic efforts to separate endpoint identifiers from topology
locators, but all of them are focused on inserting complexity into the lower
layers. Those layers don't need complexity to do their job, and any changes
in behavior will break current operational assumptions about how those
layers work. Also everyone agrees that the place that needs the complexity
of the current and future unstable network masked is the application.
Therefore the place we should be focusing solutions is in the interface
between transport and app. 

--- Solve the right problem

Tony






RE: where the indirection layer belongs

2003-08-29 Thread Yuri Ismailov (KI/EAB)
Well, I fully support the idea of a new layer above the transport, with new names and 
whatever name resolution system it requires. I think that because the idea has been hanging in 
the air for such a long time and has been researched in academia for a number of years, 
there is definitely a need for a new group.

There are a few, however, quite strong arguments for that. First of all, why limit 
ourselves to the question of where the indirection layer belongs? There is no 
indication that more than one indirection layer is not possible in the stack, and if the 
right design is assumed, there will be no collision but rather an addition of new functionality.

Secondly, looking far back in time (end of the 70's, beginning of the 80's), the right problem 
was already addressed by, let's say, V. Cerf, J. Saltzer, D. Reed, D. Clark, ...  It was 
identified that the current bundling of IP addresses and TCP port numbers allows one to 
connect but does not allow one to reconnect. The end-to-end principle, besides all else, states that 
the distribution of functions between layers is one of the most important system design 
principles. Given that, given the fact of multihoming, given the need for mobility 
support, given the need for v6-v4 internetworking, etc., there is a bunch of new stack 
features which it is desirable to implement. The design of a new layer becomes more and more 
in demand.

Thirdly, if we stay below the transport layer, all efforts will not get beyond the device 
(host) level. Obviously, there is a need for naming users, devices, content, services 
or anything else one might think about. Just as an example (there are hundreds of such 
examples), support for multihoming does not negate moving a flow from one interface 
to another (the reason does not really matter, and this is a reasonable service for 
applications). Staying below transport and using the current de-facto endpoint - the 
socket, which bundles an IP address with port numbers - we will not get any further in 
doing that in a good way. This is still multihoming of devices; what about 
multideviced users? If one has more than one device and wants to move a flow between 
devices, would this be possible if implemented below the transport layer? 

Next, what really surprises me is that everyone talks about new functionality, but none of 
the existing implementations should be changed. If so, we will be, and already are, stuck 
with virtual interfaces, multiple IP addresses (even per interface), sending multiple 
IP headers in packets, and we end up with hacks configuring and reconfiguring the routing 
table, without even trying to understand that it is not always possible and, in 
the cases when it is possible, might be harmful for applications not willing to use those 
services that require all imaginable reconfigurations.


/Yuri




RE: where the indirection layer belongs

2003-08-29 Thread Yuri Ismailov (KI/EAB)

Regarding the (a) and (b) alternatives, it would be nice to have both. However, it is not 
clear why multihoming for v6 and v4 are different issues. Handling of multiple 
addresses per host is a stack design issue.
The major problem is not to choose the right interface and to send data through it. 
Applications can do this today, for example with the SO_DONOTROUTE option in WinSock2. The 
issue is to CHANGE between interfaces. The current design is OK as long as there are no 
dynamics.

Honestly, I think that solving one problem at a time may create problems for solving 
other problems.
The broad changes in the stack - essentially the whole of the IETF's work - create a good 
opportunity to review the current design and introduce changes which allow us to avoid fixes.
For example, try to implement and run at the same time more than one draft/RFC 
introducing IP tunneling. I have had quite some fun playing around with this and, seriously, 
if you look at the real problem, it is the fact that the same name - the IP address - is 
horribly overloaded with different semantics.

Why a layer? Because if it is not a layer, then sockets, in their current form, will still 
be a limitation due to their bundling with IP addresses.

-Original Message-
From: Iljitsch van Beijnum [mailto:[EMAIL PROTECTED]
Sent: Friday, August 29, 2003 2:31 PM
To: Yuri Ismailov (KI/EAB)
Cc: [EMAIL PROTECTED]; [EMAIL PROTECTED]
Subject: Re: where the indirection layer belongs


[CC'ing multi6 as I don't think everyone there knows about this thread 
on the IETF discussion list, but please remove either ietf or multi6 if 
this discussion continues.]

On Friday, aug 29, 2003, at 11:10 Europe/Amsterdam, Yuri Ismailov 
(KI/EAB) wrote:

 Thirdly, if we stay below the transport layer, all efforts will not get 
 beyond the device (host) level. Obviously, there is a need for naming 
 users, devices, content, services or anything else one might think 
 about.

But aren't we squarely inside the application domain here?

 This is still multihoming of devices; what about multideviced users? 
 If one has more than one device and wants to move a flow between 
 devices, would this be possible if implemented below the transport layer? 
 

That's a good point.

But first of all: let's not get too carried away with new and 
interesting problems thereby forgetting about actual needs that users 
have today. We need a way to allow multihoming in IPv6 that doesn't 
have the scaling problems our current IPv4 multihoming practices have.

It's not uncommon to see a FQDN point to several IP addresses so that 
the service identified by the FQDN can be provided either by

(a) multiple hosts, or
(b) a host with multiple addresses.

Now if we want to support moving from one addresses to another in the 
middle of an (application level) session, we have two choices: build a 
mechanism that assumes (a) and by extension also works with (b), or 
focus on (b) exclusively.

It looks like Tony Hain wants to go for (a) while Keith Moore assumes 
(b), hence the insanity claims because a solution for (a), which by its 
nature can only reasonably be implemented above TCP, is much more involved 
and less efficient than one that only supports (b), which can work on a 
per-packet basis below TCP and other transports.

As a member of the multi6 design team that works on a (b) solution I'm 
convinced that such a solution will provide many benefits and should be 
developed and implemented.

However, this doesn't say anything about the need for an (a) solution. 
Obviously (a) would be good to have. Peer to peer file sharing 
applications extensively use mechanisms like this, where they download 
parts of a file from different hosts.

Also, the current socket API isn't exactly the optimum way for 
applications to interact with the network. It would make sense to 
create something new that sits between the transport(s) and the 
application, and delegate some tasks that are done by applications 
today, most notably name resolution. This allows us to implement new 
namespaces or new address formats without any pain to application 
developers or users stuck with applications that aren't actively 
maintained by developers. This would also be a good opportunity to 
create sensible ways for applications to ask for advanced services from 
the network.

But the real question here is: does this new thing have to be a 
layer? In the layering system we have today, each layer talks to the 
underlying one on the local system, but it also interacts with the same 
layer on the remote end.

I'm not convinced this is called for here. For instance, a host that 
implements the above could use a new advanced API to open a set of TCP 
sessions to all the addresses that a FQDN points to, and then 
distribute HTTP requests for the different parts of a file over these 
sessions. But on the other end of one or more of these sessions, there 
could be a regular HTTP server that is completely unaware of the new 
mechanisms and uses existing socket API calls.
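
[An illustrative aside: what such an advanced API call might boil down to
underneath, using nothing but today's socket calls, is to resolve the FQDN
once and open one TCP connection per address returned, over which requests
could then be spread.  The function name, the fixed-size result array and
the blocking connect() are assumptions made for this example.]

    /* Illustrative sketch only: one connection per address of a FQDN. */
    #include <sys/socket.h>
    #include <netdb.h>
    #include <string.h>
    #include <unistd.h>

    /* Open one TCP connection per address that host:port resolves to,
     * storing up to `max` connected sockets in `fds`.
     * Returns the number of connections opened. */
    int connect_all(const char *host, const char *port, int *fds, int max)
    {
        struct addrinfo hints, *res, *ai;
        int n = 0;

        memset(&hints, 0, sizeof hints);
        hints.ai_family = AF_UNSPEC;
        hints.ai_socktype = SOCK_STREAM;
        if (getaddrinfo(host, port, &hints, &res) != 0)
            return 0;

        for (ai = res; ai != NULL && n < max; ai = ai->ai_next) {
            int fd = socket(ai->ai_family, ai->ai_socktype, ai->ai_protocol);
            if (fd < 0)
                continue;
            if (connect(fd, ai->ai_addr, ai->ai_addrlen) == 0)
                fds[n++] = fd;    /* the peer can be an ordinary HTTP server */
            else
                close(fd);
        }
        freeaddrinfo(res);
        return n;
    }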

So yes, we

Re: where the indirection layer belongs

2003-08-29 Thread Keith Moore
(regarding the complexity of putting a general-purpose layer to survive
address changes between L4 and L7)

   But why do you assert that it will take lots of complexity and 
 overhead?  Can you point to some code where they tried this?  As far
 as I know, nobody has really given this an earnest try as yet.  At
 least not with any IP protocols.

I tried this in connection with a project called SNIPE that I worked on
several years ago.  SNIPE was an attempt to build a very
reliable distributed computing environment that supported, among other
things, the ability to access a computing resource via multiple
addresses (mostly in order to exploit high-bandwidth local networks
not necessarily using IP), and the ability of both hosts and processes
to migrate to other addresses.  It used a DNS-like service similar to
RESCAP (for those who remember that) to register the addresses at which
a process was accessible, and it attempted to provide TCP-like streams
on top of TCP and this registry that would survive changes in those
addresses.  Basically I found that you can get such a service to work
reasonably well and reasonably efficiently if endpoints don't move
without adequate notice.  OTOH if hosts or processes do move without
adequate notice then you end up needing to implement the mechanisms I
mentioned earlier, and that involves extra copies and extra overhead. 

The reason I am proposing that the two problems (changing addresses
with adequate notice and changing addresses without adequate notice) be
treated separately is that by trying to make a single mechanism serve
both purposes you end up with a lot of inefficiency and/or cost that
aren't needed in most cases.  And that's true (for different reasons)
regardless of whether you insert that single layer between L3 and L4 or
between L4 and L7.

 What specific glue do you believe it requires for the L4 to L7
 approach? 

Thought I'd said this already: buffering of data until acknowledged by
the peer, generation and processing of acknowledgements, retransmission
of lost messages due to broken connections, windowing (so you don't
degrade to stop-and-wait), duplicate suppression.  You also need to
multiplex control and data information over the same stream and to probe
for other addresses when you get a failure on the one you were using.

   How does that compare to what is needed in an L3 solution?

If you work on the problem at (or just above) L3, transport protocols
already have the code needed to recover from lost messages, so you don't
have to reimplement that.  You basically need a mechanism by which the
layer can realize it needs to start routing packets differently, and do
so.  You probably need multiple ways that the layer can get that
information because the remote address can change for a variety of
reasons and in lots of different places in the network.   (That's
equally true for the L4-L7 layer as for the L3-L4 layer, but the L4-L7
layer isn't in a position to get some of that information.  The L3-L4
layer can potentially recover from address changes more quickly but to
do that safely it has to be able to authenticate and establish a basis
for trust in a wider variety of information sources.)
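
As a toy illustration of the shape of that - not any particular
proposal - the layer amounts to an identifier-to-locator table consulted
on every outgoing packet, with updates accepted (after authentication,
which is waved away below) from several kinds of sources:

class LocatorShim:
    """Maps a stable identifier (what the transport sees) to whatever
    locator (routable address) is currently usable for the peer."""

    def __init__(self):
        self.locators = {}   # identifier -> current locator

    def learn(self, identifier, locator, source):
        # Updates can come from several places - the peer itself, a mapping
        # service, local link events.  A real layer must authenticate each
        # source before believing it; that is omitted here.
        self.locators[identifier] = locator

    def rewrite_outgoing(self, identifier, segment):
        # The transport addressed this segment to the identifier; we hand it
        # to the network using the current locator.  TCP above never notices.
        return self.locators[identifier], segment

shim = LocatorShim()
shim.learn("peer-A", "2001:db8::1", source="initial contact")
shim.learn("peer-A", "2001:db8:2::9", source="mapping service")  # peer moved
dest, seg = shim.rewrite_outgoing("peer-A", b"...tcp segment...")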

  Yes you can do that but it presumes that the host knows a priori
  whether or not it needs the stabilization layer.  I would make the
  mechanism used to provide stable addresses a property of the
  network- either the network provides reasonably stable addresses
  (i.e. plenty of prior notice before changing them) or it provides a
  stabilization layer.  That way, networks that don't need it don't
  pay the overhead.
 
 But I would argue that the host or at least the application's designer
 knows whether it will need the stabilisation layer. 

It can't know that reliably unless the network without the stabilization
layer has well-defined properties - e.g. the network won't change the
addresses it assigns without advertising those changes with a minimum
amount of advance notice.  If addresses can potentially change at
arbitrary times with no assurance of stability then every app needs the
stabilization layer (or must provide its own means of recovery).

 Making the 
 mechanism that provides the stable network addresses a property of the
 network leaves the question of how.  Even if that were achieved
 though, that still does not completely or effectively address the
 problem of one application process identifying its peer across the
 network.

Well, there's no way I can supply all of the details in a few email
messages.  I've been trying to find time to write them up.  In the
meantime I want to discourage too much momentum behind trying to solve
all of the address changing problems in any one place - particularly
between L4 and L7, because I already know that that won't work well.

Keith





Re: where the indirection layer belongs

2003-08-29 Thread Keith Moore
 Regarding this discussion about an indirection layer, I am thinking we
 really should propose the formation of some forum for discussion of
 these issues. [...] Call it an indirection layer or a stabilisation
 layer or whatever you want, but we need a forum where we can specify
 the problem we are trying to solve and to consider the possible
 solutions for it.  Does anybody agree?

I don't disagree with the need for a forum at some point, just with the
presumption that a single layer can reasonably solve all of the problems
associated with the various sources of address changes.  So I'd really
push back against an effort to try to accomplish the latter.

Also, experience with the IRTF name space research group (which was
tasked to work on a similar problem, though phrased somewhat
differently) has probably left some people (myself included) feeling a
bit ... well, hesitant.  If a relatively small, select
group of very talented experts couldn't agree on how to solve a problem,
is an open forum consisting of an arbitrary number of people with
varying levels of expertise likely to do better?  Bottom line is that
it's very difficult to reconcile the views of people with experience in
very different parts of the network - apps vs. routing vs. transport -
even if they're all highly competent.  It's probably even more difficult
if you have larger numbers of people and you can't assume the competence
level.

Obviously we need to solve this problem.  We just need to be careful
about how we go about it if we hope to be successful.

Personally I think a forum might be a bit premature, as it would
distract various people's energy away from efforts to draft strawman
architectures, and instead tempt them to spend time getting in sync with
the group.  Maybe we could have a BOF in Minneapolis and wait until after
that to formally organize a discussion group?

Keith



Re: where the indirection layer belongs

2003-08-29 Thread Masataka Ohta
Keith;

 (regarding the complexity of putting a general-purpose layer to survive
 address changes between L4 and L7)

It is not merely complex but also useless to have such a layer.

The basic problem with the approach of having such a layer is that
only the application layer has proper knowledge of the proper
timeout value for trying other addresses.

So, the issue can be solved only by involving the application layer,
though most applications over TCP can be satisfied by TCP's default
timeout. In either case, there is no room for an additional layer.

I documented it long ago (April 2000) in draft-ohta-e2e-multihoming-*.txt

   The Architecture of End to End Multihoming

(the most current version is 05) that:

   To support the end to end multihoming, no change is necessary on
   routing protocols. Instead, APIs and applications must be modified to
   detect and react against the loss of connection.  In the case of TCP,
   where there is a network-wide de facto agreement on the semantics
   (timeout period) of the loss of connectivity, most of the work can be
   done by the kernel code at the transport layer, though some timing
   may need to be adjusted for some applications. However, in general,
   the condition of loss of connectivity varies from application to
   application, so multihoming must be controlled directly by
   application programs.
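
As an illustration only (this code is not from the draft), application
controlled reaction to the loss of connectivity can be as simple as
trying each of the peer's addresses with a timeout the application
itself chooses:

import socket

def connect_multihomed(name, port, per_address_timeout):
    # per_address_timeout is the application's decision: an interactive
    # tool might pick 2 seconds, a bulk transfer might tolerate 30.
    last_error = None
    for family, type_, proto, _, sockaddr in socket.getaddrinfo(
            name, port, proto=socket.IPPROTO_TCP):
        s = socket.socket(family, type_, proto)
        s.settimeout(per_address_timeout)
        try:
            s.connect(sockaddr)
            s.settimeout(None)      # back to blocking once connected
            return s
        except OSError as error:
            last_error = error
            s.close()
    raise last_error if last_error else OSError("no addresses to try")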

Masataka Ohta

PS

Layering is abstraction and not indirection at all.



Re: where the indirection layer belongs

2003-08-29 Thread Keith Moore
well, the reason I named a specific time interval was to provoke
discussion, so I suppose I shouldn't be disappointed...

I am not sure that one week is the best figure.  I imagine that figure
could reasonably be picked to be anywhere from several hours on the
low end to a few weeks on the high end.


However as I've written in other messages today I believe that trying to
use a single mechanism to handle all cases where addresses change is
probably too expensive (either in money or reduced performance or both),
and to me it makes sense to handle two cases separately - those for
which the change is known about well in advance, and those for which it
is not.  The idea is for L3 or L4 to handle the unanticipated changes
and for L7 (or a layer between L4 and L7) to handle those that are
anticipated and announced.

For the anticipated changes, one week notice/stability does not seem
like an onerous requirement.   For unanticipated changes of wired
networks, I don't think providing redirection for one week is too
expensive either.  And for mobile networks I think it's reasonable to
expect them to pay for redirection services.  In this way the costs of
providing the infrastructure necessary to compensate for the lack of
natural address stability (i.e. the ability to move without one week's
notice) can be borne by those who need that service; others need not pay
for it.  Finally, one week is long enough that most apps (i.e.
those whose connections never have to last that long) won't have to
worry about it or pay for the extra overhead.  You could pick a shorter
time interval but it would eventually start to impact a significant
number of apps.

Of course, there will still be apps that need to worry about user
mobility rather than host mobility.  Since the network doesn't know
anything about users, those apps will still need to provide their own
facilities to keep track of users' network locations.

Keith

p.s. and yes, AOL's IM tracking is quite dynamic, but it's also highly
centralized.  based on the experience with ICANN I think it would be
very difficult to manage the political issues associated with a central
Internet prefix mapping service - though the experience with the ongoing
ENUM work seems, at least from my distant vantage point, to be somewhat
more promising.


 as one thinks about VPNs, tunneling, mobility and the like, assuming a
 week is probably a bad idea. Think a bit about the way AOL's IM tracks
 the binding of an IM handle to an IP address - it is quite dynamic.



Re: where the indirection layer belongs

2003-08-29 Thread jfcm
At 20:54 29/08/03, Keith Moore wrote:
 Personally I think a forum might be a bit premature, as it would
 distract various people's energy away from efforts to draft strawman
 architectures, and instead tempt them to spend time getting in sync with
 the group.  Maybe we could have a BOF in Minneapolis and wait until after
 that to formally organize a discussion group?

Dear Keith,
I fully agree with that.  However I feel that the IPv6 numbering plan and
all the related issues are / have been discussed a lot in different fora
without addressing my expectations - and, since I am just one inter pares,
probably without addressing many others' expectations either.  This makes
all of us uncertain about what has been discussed, where, how specialists
have decided things we think we have to disagree with, and where we can
contribute while wasting the minimum of everyone's time.

Might I suggest a preliminary approach: instead of trying to propose
solutions now, why not try to list all the demands, independently of any
solution, and then try to put some order into them by possible solutions.
Such a step could be open to everyone.  I am sure that many of us would
leave the rest to the specialists if we could be sure that our pet needs
will be supported or at least considered.

Also, I understand that many feel there is a possible need for multiple 
numbering plans. It would be better to clarify that now.

jfc

I must say that this mail is also a test to see if I get flamed because I
would be - like many others here who do not say so - ignorant of what I
should know.  A good way to learn.  IPv6 is not something for specialists
to define: it concerns everyone, and everything in the network architecture.





Re: where the indirection layer belongs

2003-08-29 Thread Keith Moore
 It's not uncommon to see a FQDN point to several IP addresses so that 
 the service identified by the FQDN can be provided either by
 
 (a) multiple hosts, or
 (b) a host with multiple addresses.
 
 Now if we want to support moving from one address to another in the 
 middle of an (application level) session, we have two choices: build a
 mechanism that assumes (a) and by extension also works with (b), or 
 focus on (b) exclusively.
 
 It looks like Tony Hain wants to go for (a) while Keith Moore assumes 
 (b),

No.  A client can't tell whether multiple addresses associated with a
DNS name indicate multiple hosts or multiple addresses for the same
host.  No matter where the stabilization layer(s) live, using DNS as a
means to map from identity to locations simply won't work.  It might be
good enough for initial connection (assuming that if a service exists on
multiple hosts, any of them will do), but it's not good enough for
re-establishing an already-open connection, because you might get a
different host the next time.

  hence the insanity claims because a solution for (a), 

I don't know whether Tony assumes (a) or not.  But that has nothing to
do with why I made that particular claim.

 As a member of the multi6 design team that works on a (b) solution I'm
 convinced that such a solution will provide many benefits and should
 be developed and implemented.

And I'm equally convinced that a solution that assumes (b) is a
nonstarter.  There is already too much practice that conflicts with
it.  Like it or not, DNS names are no longer host names in practice;
they're service names.  A service can be a host but it's not
reasonable to assume that a service _is_ a host.

 Also, the current socket API isn't exactly the optimum way for 
 applications to interact with the network. It would make sense to 
 create something new that sits between the transport(s) and the 
 application, and delegate some tasks that are done by applications 
 today, most notably name resolution. This allows us to implement new 
 namespaces or new address formats without any pain to application 
 developers or users stuck with applications that aren't actively 
 maintained by developers. 

No it doesn't.  What it allows us to do is impose subtle changes in
the behavior of lower layers, and introduce new failure modes, without
giving applications any way to deal with them. 


 But the real question here is: does this new thing have to be a 
 layer? 

It depends on which thing you are talking about.  For the L3-L4 thing,
it's either a new layer or a change to an existing layer.  If you
have both the L4-L7 thing and the L3-L4 thing, the former is either a
new layer or (my personal opinion) a new API that knowledgeable apps
call explicitly.
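
As a purely hypothetical sketch - none of these names exist anywhere -
such an explicitly-called API might let the app state a policy up front
and see address changes as events rather than as silent behavior changes:

from dataclasses import dataclass

@dataclass
class SessionPolicy:
    failover_timeout: float = 3.0   # how long the app will wait per address
    allow_rebinding: bool = True    # may the library move the session at all?

class StableSession:
    """Hypothetical, explicitly requested session object.  Legacy apps keep
    calling connect(); apps that opt in get told when the peer's address
    changes instead of having the change hidden underneath them."""

    def __init__(self, service_name, port, policy, on_rebind=None):
        self.service_name = service_name   # FQDN or other service name
        self.port = port
        self.policy = policy               # a SessionPolicy
        self.on_rebind = on_rebind         # callback(old_addr, new_addr)
        self.current_addr = None

    def _rebind(self, new_addr):
        # Called by the (unwritten) transport machinery once it has moved
        # the session; the application sees an event, not a new silent
        # failure mode.
        old, self.current_addr = self.current_addr, new_addr
        if self.on_rebind:
            self.on_rebind(old or "", new_addr)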