Hi Med,

Thanks for your comments:

  http://www.firstpr.com.au/ip/ivip/fb/Ivip-arch-03-Med-Boucadair.pdf

Here are my responses to the first part of your comments.

You wrote:

> [Med1] : After reading the document, I have the following comments
> (some of theme may be valid for other CES proposals)
>
>   - a figure to show all involved functional elements would ease
>     the readability of this document.

This would be good, but I find ASCII art diagrams pretty tricky.
There is a simple multihoming service restoration diagram at the top
of the Ivip page http://www.firstpr.com.au/ip/ivip/ .

Further down the page in the Mobility section are two more detailed
diagrams, depicting TTR mobility, but these involve mobile nodes and
TTRs, which act as both ETRs and ITRs - none of which are a part of
non-mobile Ivip.

I would like to make a better diagram - but for the web page, rather
than the IDs.


>   - An example of call flow would be also more than welcome

I guess you mean something depicting the flow of actions.  I will
consider this.  One set of actions is initiated by an end-user or
some company they authorise to change the mapping of their micronet.
They take some action to change it, and it flows through a UAS
system, to an RUAS, and then typically with other changes is placed
into the payload of a packet which is sent with DTLS to each of the
five to eight level 0 Replicators.  ("Launch servers" in the
Ivip-arch-03 which you read.)

The four or five levels of Replicators fan packets out, carrying this
payload and (ideally) at least one copy of the payload arrives at a
particular QSD in an ISP network somewhere.  (Packets with the same
payload arrive in potentially hundreds of thousands of QSDs, but only
one would be illustrated.)  The QSD updates its copy of the mapping
database for the particular MAB (Mapped Address Block) these changes
apply to.

That may be all that happens.

However, if this QSD recently sent out a mapping reply to a querier
(an ITR or a QSC - caching query server) about the micronet which
just had its ETR address changed, then the QSD would send out an
update message to that querier.  If the querier was a QSC, it would
pass the update on to the one or more queriers to which it recently
gave the old mapping of this micronet.  ("Recently" means within
the caching time the QSD set in the original map reply.)

Then, an ITR (or maybe several of them) changes the ETR address it
will tunnel packets to if it receives any traffic packets addressed
to any SPI address matching this micronet.

So nothing more happens until one or more hosts send such packets,
and then the ITR encapsulates them (or uses modified header
forwarding) to tunnel them to the new ETR.

The ETR gets the packet to the destination network.

I think it would be challenging to do this in a bunch of illustrations.
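Short of a laser pointer, the Replicator fan-out above can at least
be counted.  Here is a minimal Python sketch; the fan-out factor and
the number of levels below level 0 are my own illustrative
assumptions, not figures from the Ivip drafts:

```python
# A rough stand-in for a diagram of the fan-out described above:
# one mapping change goes end-user -> UAS -> RUAS -> level 0
# Replicators, and each level of Replicators copies it onward.
# The fan-out factor and level count are illustrative assumptions.

def packet_copies(levels_below=4, fanout=8):
    """Count the packet copies one mapping-change payload generates
    as it fans out from the RUAS through the Replicator levels."""
    copies = fanout            # RUAS sends to each level 0 Replicator
    total = copies
    for _ in range(levels_below):
        copies *= fanout       # each Replicator feeds `fanout` others
        total += copies
    return total

# 8 level 0 Replicators and 4 levels below them reach 8**5 = 32768
# leaf recipients (QSDs):
print(packet_copies())
```

The point of the sketch is just that a handful of levels of modest
fan-out reaches hundreds of thousands of QSDs with one change.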

The other major source of action is a traffic packet arriving at an
ITR, addressed to an SPI address (that is its destination address
falls within some micronet, in some MAB) and the ITR recognises it
doesn't have mapping for this address.

So the ITR requests mapping for this address from its nearby query
server.  This might be a full database QSD, or it might be a caching
QSC.  If this has the mapping, it sends back to the ITR a map reply,
with a caching time.  If not (and a QSD always knows the mapping,
unless for some reason its database for this MAB is corrupt and it
is rebuilding it - in which case it would get mapping from some other
QSD) then the ITR must have asked a QSC.  So the QSC sends a mapping
query, again with this single destination address, to another query
server. Let's say it is a QSD.  The QSD replies to the QSC, with a
caching time and the full details of the micronet which this address
matches.  This is a starting and ending address - and an ETR address
which the micronet is currently mapped to.  The QSC caches this and
passes the information on in a map reply to the ITR.  These map
replies are secured by a nonce copied from the request.

Now the ITR has the mapping. It stores this in its cache and tunnels
the packet, which it has buffered, to the ETR as specified.  This
whole query and response process would take a few tens of
milliseconds, so the delay in tunneling the packet to the ETR is
insignificant.

Subsequent packets going to the ITR which match this micronet will be
tunneled in the same way.
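The query and caching behaviour just described could be sketched
roughly as follows.  The class names, message fields and integer
addresses are illustrative only; just the nonce check and the caching
time follow the text above:

```python
import os
import time

class ITR:
    """Toy model of the ITR query/cache behaviour described above.
    Addresses are plain integers for simplicity."""

    def __init__(self, query_server):
        self.qs = query_server
        self.cache = {}   # micronet (start, end) -> (etr, expiry)

    def lookup(self, addr):
        # Check cached micronets first.
        for (start, end), (etr, expiry) in self.cache.items():
            if start <= addr <= end and time.time() < expiry:
                return etr
        # Cache miss: send a map request secured by a nonce.
        nonce = os.urandom(8)
        reply = self.qs.map_request(addr, nonce)
        assert reply["nonce"] == nonce   # reject spoofed replies
        start, end, etr, ttl = (reply[k] for k in
                                ("start", "end", "etr", "cache_time"))
        self.cache[(start, end)] = (etr, time.time() + ttl)
        return etr

class QSD:
    """Full-database query server: always has the mapping."""
    def __init__(self, micronets):
        self.micronets = micronets   # list of (start, end, etr)

    def map_request(self, addr, nonce):
        for start, end, etr in self.micronets:
            if start <= addr <= end:
                return {"start": start, "end": end, "etr": etr,
                        "cache_time": 60, "nonce": nonce}

qsd = QSD([(100, 103, "203.0.113.1")])
itr = ITR(qsd)
print(itr.lookup(101))   # miss -> map request -> map reply
print(itr.lookup(102))   # hit from cache, no query sent
```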

Now, if the user changes the mapping for this micronet, then the
first process occurs, but does not end with the QSD just updating its
mapping.  It also checks its "querier" cache, which tells it that
there is a querier caching mapping for this micronet which has just
had its mapping changed.

Maybe the ITR which requested this mapping is no longer handling
packets to this micronet - but it may be, so the QSD sends a mapping
update message to the querier, which was a QSC.  Map requests,
replies and mapping updates are all UDP packets.  The mapping updates
need to be acknowledged, otherwise the QSD will keep sending them for
a while.

Now the QSC checks its querier cache and finds that this particular
ITR has requested this mapping recently, so it sends a map update
message to the ITR and the ITR alters its cache - and therefore where
it will tunnel packets to if any arrive addressed to this micronet.

None of these updates extend the caching time.  If the original
micronet was split into different micronets, the updates give all the
details, but do not extend beyond the range of addresses covered by
the micronet which was returned in the first map reply.


A picture is worth a thousand words, but a picture for this would be
big and would need about as many words as above.  I could do it with
a large diagram and a laser pointer!


>   - The document does not discuss if single homed networks needs to
>     deploy ITR. Why they have to pay this cost?

I think this is a very good question.  This answer is long, but I
will try to include a shorter version in a future version of the ID.


No network absolutely needs to install an ITR.  If there is no ITR
function in the sending host or in the end-user network of the
sending host, or in the ISP network of the sending host or the ISP
network which the end-user network uses to connect to the Net, then
the packet will go out to the DFZ and be forwarded to one of multiple
DITRs (Default ITRs in the DFZ).  The DITR will tunnel the packet to
the ETR.  There could be various sets of DITRs around the Net, and
one set of them will be advertising the MAB which matches the
destination address of the packet.

Ordinary ITRs are in ISP and end-user networks.  They advertise all
MABs to the local routing system and so attract packets sent to any SPI
address.  Alternatively, they advertise the default route, and get
packets which are addressed to SPI addresses as well as those which
are addressed to conventional addresses.  Either way, they tunnel any
packet to an ETR if it is addressed to an SPI address.  (Assuming the
mapping of the micronet is to an address, rather than to 0.0.0.0,
which means the ITR should drop the packet.)  These ordinary ITRs
look for packets addressed to all SPI addresses - they handle packets
whose destination address matches any of the MABs in the Ivip system.

An ITR in a sending host (ITFH) doesn't advertise anything - it just
does the ITR functions on any packet sent by the host which has an
SPI address.

All ITRs, including those in sending hosts, need to obtain from their
local QSD an up-to-date list of all the MABs.  I haven't detailed
how they will do this yet.  Also, it would be good if ITRs (including
especially ITFHs) could auto discover several nearby query servers in
the local network.  I haven't figured out how to do this yet, though
I guess it could be added to DHCP.

Initially, there's probably no great motivation for anyone to install
ITRs, but any company which is in the business of renting out SPI
space to end-user networks will be keen to have a good system of
DITRs around the Net, widely distributed, so no matter where the
sending host is, the path from the host to the nearest DITR and then
to the ETR will not be much longer than the path from the sending
host to the ETR directly.

DITRs are unlikely to advertise every MAB.  Maybe someone out of the
goodness of his or her heart would like to run a DITR which does
this, and dutifully tunnels any packet addressed to an SPI address to
the correct ETR.  But even though the DITR may be an inexpensive
server - perhaps just a function running on some existing server -
it still costs money to have it in the DFZ, sending and receiving
packets, and it would need to be accepted as a router - being linked
to at least one other DFZ router, by which it could advertise routes
to all these MABs and so be sent the packets.

The most likely arrangement for DITRs is that they will be run by, or
for, the MAB companies who "own" one or more MABs and rent out space
from them, in small chunks (User Address Blocks - UABs) for end-user
networks to use.  The end-user networks, assuming they get more than
one IPv4 address (or one IPv6 /64) can then split up their UAB into
multiple micronets, if they want.  Otherwise the whole UAB will be a
single micronet the end-user network can map to any ETR it likes.
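As a toy example of this UAB-to-micronet splitting - the /28 block
and the ETR addresses here are made up:

```python
# Sketch of splitting a User Address Block (UAB) into micronets,
# each mappable to its own ETR.  Micronets here are just
# (start, size) ranges, per the starting/ending-address mapping
# described earlier.

import ipaddress

uab = ipaddress.ip_network("192.0.2.16/28")   # a 16-address UAB
addrs = list(uab)

# The end-user network splits its UAB into three micronets:
micronets = [
    (addrs[0],  4, "203.0.113.1"),    # 4 addresses  -> ETR at site A
    (addrs[4],  8, "198.51.100.7"),   # 8 addresses  -> ETR at site B
    (addrs[12], 4, "203.0.113.1"),    # remaining 4  -> back to site A
]

for start, size, etr in micronets:
    print(f"{start} + {size} addresses -> {etr}")
```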

End-user networks will be paying their MAB companies for traffic
carried on that company's DITRs which is addressed to one of the
end-user network's micronets.  Otherwise, the MAB company could be
paying for a lot of bandwidth for these DITRs and not gaining any
revenue from the end-user network which benefits.

Any DITR is likely to be busy, and it needs a QSD nearby.  So there
would probably be a QSD in the same rack, with the DITR ready to use
a more distant QSD if the local one failed for some reason.  There
could be multiple DITRs, some advertising one set of MABs and some
advertising other MABs, to split the load.  They could be in the same
rack and use the one QSD.


With the initial introduction of Ivip, probably few networks will
feel the need to install ITRs.   When it is more widely adopted,
assuming an ITR is inexpensive, then more would adopt them -
especially ISPs, who could then tell their customers that they are
doing their best to support their traffic, enabling it to be handled by
their own high-capacity ITR rather than going out to the DFZ to a
DITR which is also handling packets from other networks.

There is a further reason why an ISP would want to install an ITR
once Ivip becomes widely used.

The ISP by then would probably have some ETRs in its network.

Assuming there were sending hosts in its network, or in any end-user
networks connected to this ISP, and these sending hosts were
sometimes sending packets to hosts in end-user networks which were on
SPI addresses AND were using this ISP's ETRs . . . then the ISP has a
natural incentive to install a local ITR.  Without the ITR, those
packets are going to go out to the DFZ to a DITR and return in
encapsulated form, being tunneled to one of the ETRs in the ISP's
network.  This would burden the ISP's expensive upstream links to
other ISPs.  With a local ITR, the packets will be encapsulated
locally and tunneled to these ETRs.

Of course the new ITR will also be handling packets addressed to SPI
addresses which are mapped to ETRs all over the world.  So most of
the tunneled packets will go out one of the ISP's upstream links
anyway.

I think that for these two reasons, ISPs would be motivated to
install local ITRs once Ivip became moderately widely adopted.

Furthermore, if the ISP is trying to attract the custom of end-user
networks with SPI space, then it will be making ETRs available (if
that is what the end-user network wants - the ETR could also be at
the end-user site, and just working off the conventional PA address
the ISP's link works from) and making it known that the ISP is hip to
Ivip.  Then of course, the ISP would want to show its commitment by
having ITRs in its network.

ITR functions could be done by existing routers.  I am not sure how
many routers - especially older, already-installed routers - will be
able to do this early on.  But a perfectly good ITR could be done with
software on a COTS (Commercial Off The Shelf) server, which is
inexpensive - and the ISP probably has a bunch of servers anyway.

To return to your question about singlehomed networks.  These are
end-user networks, with a single link either to:

   (A) an ISP on the ISP's PA address space or

   (B) to an ISP, but with their own PI space.

(I think there's no such thing as a singlehomed ISP network, unless
it is a very small ISP.)

With Ivip, there could be two other kinds of single homed network.
In both cases, the network is using one or more micronets of SPI
space, but is not multihomed.  So the benefit is portability, and
stability of the address space, no matter where this end-user network
connects (via any ISP in the world).  This includes the ability to
have this stable space in small increments, down to a single IPv4
address or IPv6 /64.  This involves much less cost than getting a
whole /24 and wasting most of its space, when the network, such as a
branch office of a large company, can run fine on a single IPv4
address or a few such addresses.

   (C) the ETR is in the ISP, and potentially serves other such
       SPI-using end-user networks.  The ETR has some way of driving
       the link to the end-user network site.   This is good for
       scaling, since one ETR on one single conventional IP address
       (PA address of the ISP) can support multiple SPI-using
       end-user networks.

   (D) the ISP provides a link to the end-user site, such as an
       ordinary DSL link, with a single stable IPv4 address.  The
       end-user network, such as a SOHO, factory or whatever, runs
       its own ETR and uses whatever one or more micronets of space
       mapped to this ETR.  (In this case, the ISP has no involvement
       and may not notice what is happening.)

In these cases, the end-user network is not multihomed - but with a
little effort, they could be multihomed in the future, without
altering their addresses.

The motivation would be to have stable, portable address space in
small chunks for a low cost.

In all cases, A, B, C and D, the end-user network doesn't need to get
an ITR.  They may want to get one if their ISP has no ITR and they
are finding that when they send packets to SPI addresses, the nearest
DITR is overloaded so some of their packets are dropped.

If their ISP has an ITR, there's no benefit at all in having their
own.  An ITR needs to talk to a QSD - and that needs to be nearby,
such as in the ISP.  If the ISP doesn't have an ITR, it probably
doesn't have a QSD either.  So it would be tricky for the end-user
network to run its own ITR.  Maybe it could rely on some QSD in a
nearby ISP, by special arrangement.

Running a QSD is a non-trivial thing to do.  They would need a
reliable server and to get access to at least two streams of mapping
updates from two Replicators, ideally two in topologically different
locations.

These will send in packets continually, depending on the rate of
mapping updates - which depends on how widely Ivip is used at the
time.  This traffic may not cost much, but it is still an
administrative responsibility to run a QSD, get or pay for streams
from two replicators and to have it able to access one or more
Missing Payload Servers so it can get payloads of packets which did
not arrive via either stream from the Replicators.

Missing Payload Servers are a new concept - in the new ID which gives
details of the simplified Replicator system, without special "Launch"
servers:

  http://tools.ietf.org/html/draft-whittle-ivip-fpr-00

I just finished this.  I will update Ivip-arch soon to mention these.
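A rough sketch of the recovery idea, assuming per-stream sequence
numbers and a fetch-by-number interface - these are my assumptions
for illustration; the fpr draft defines the actual mechanism:

```python
# Sketch of a QSD recovering payloads which arrived on neither
# Replicator stream, by asking a Missing Payload Server (MPS).

def merge_streams(stream_a, stream_b, fetch_missing):
    """Merge two Replicator streams of (seq, payload) pairs and
    fill any gaps via the Missing Payload Server."""
    received = {seq: p for seq, p in stream_a}
    received.update(dict(stream_b))
    top = max(received)
    for seq in range(top + 1):
        if seq not in received:
            received[seq] = fetch_missing(seq)   # ask the MPS
    return [received[s] for s in range(top + 1)]

# Payload 3 arrived on neither stream; the other losses are
# covered by one stream or the other:
a = [(0, "p0"), (1, "p1"), (4, "p4")]
b = [(0, "p0"), (2, "p2"), (4, "p4")]
print(merge_streams(a, b, fetch_missing=lambda s: f"p{s}"))
```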


In the C and D cases (in D, assuming the ISP notices what is
happening), the ISP will be motivated to install an ITR, because as
noted above, it doesn't want
other hosts within its customers' networks sending packets to this
end-user network, which would go out to the DFZ to a DITR, and then
return back to the ISP's network to be forwarded to the ETR.

In the A and B cases, the ISP may not care much about Ivip and ITRs,
as long as it has no customers using SPI space.  But over time, ISPs
would be keen to attract customers using SPI space, so I would expect
most ISPs of any substance to offer ETRs and likewise ITRs to tunnel
local traffic to them directly.

To return to your question again . . .

>   - The document does not discuss if single homed networks needs to
>     deploy ITR. Why they have to pay this cost?

They don't have to get their own ITRs.  They can always rely on DITRs
for whatever packets they are sending to SPI addresses.

If the DITRs are getting overloaded, this would be a reason for the
end-user networks whose SPI space these DITRs are handling, to be
unhappy.  This would put pressure on their MAB company to install
more DITRs, or give them more capacity.  Likewise if the DITRs were
in positions where there were sometimes much longer paths than there
could be if there were more widely dispersed DITRs.

Overall, I envisage that at first most ITRs would be DITRs.

Then, when more ISPs get customers using SPI space, the ISPs will
install their own ITRs for reasons noted above.  Also, ISPs can use
their ITRs to show their customers they are providing a good service.

Over time, more ISPs will have ITRs and the proportion of traffic
carried on DITRs will reduce.  There will always need to be DITRs.

A single-homed end-user network - or a multihomed one - using
conventional PA space, conventional PI space or SPI space, doesn't
really need an ITR.

If the upstream ISP(s) have ITRs which are not overloaded, then all
will be well.  Those ITRs are on-path for where the packets are
heading anyway, so there will be no extra path length.

If the ISP doesn't have ITRs, which is unlikely if it has SPI using
end-user network customers, then the end-user network can let its
packets be handled by DITRs.


>  -The document does not assess the impact of the presence of
>   several LQSD on the validity of the stored information.

I don't clearly understand your question.  An ISP with ITRs would
probably run two QSDs, and arrange for two streams from Replicators
for each, ideally from four not too distant Replicators which
nonetheless were in topologically diverse locations and which had
their own streams from a reasonably diverse set of higher level
Replicators.

Ordinarily the QSDs will have identical mapping data.  If they don't,
it will be because one or both are missing some packets.  This should
take them a few seconds to fix via Missing Payload Servers, unless
they are missing a lot, due to some more serious outage.

The stream of mapping updates will include "snapshot messages" where
the RUAS announces it just saved a snapshot of the mapping for a
particular MAB, which any QSD can download via HTTP.  This message
contains some kind of hash function, checksum or whatever by which
the QSD can check its own copy of mapping for this MAB.  If the check
shows there is a difference, the QSD will need to download a
snapshot, unpack it, apply updates to it which arrived after the
snapshot message - and then it will have the correct mapping for this
MAB again.
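The snapshot check might be sketched like this - SHA-256 over a
canonical dump is just my stand-in for whatever hash, checksum or
whatever the message actually carries:

```python
import hashlib
import json

def mab_digest(mapping):
    """Hash a QSD's copy of one MAB's mapping database."""
    blob = json.dumps(sorted(mapping.items())).encode()
    return hashlib.sha256(blob).hexdigest()

def check_and_resync(local, snapshot_digest, download, later_updates):
    """Compare digests; on mismatch, download the snapshot and
    re-apply updates which arrived after it was taken."""
    if mab_digest(local) == snapshot_digest:
        return local                  # local copy is correct
    fresh = download()                # fetch the snapshot via HTTP
    fresh.update(later_updates)       # replay post-snapshot changes
    return fresh

good = {"micronet-1": "ETR-A", "micronet-2": "ETR-B"}
corrupt = {"micronet-1": "ETR-A"}     # lost an update somewhere
fixed = check_and_resync(corrupt, mab_digest(good),
                         download=lambda: dict(good),
                         later_updates={})
print(fixed == good)   # the QSD has correct mapping for the MAB again
```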

Ivip does not have any facility for QSDs comparing notes.  Each QSD
should be able to run independently.  Though they will all need
access to two or so Missing Payload Servers.  One close and one far
away, in another country, would be good - the distant one is unlikely
to be missing packets which for some reason are missing locally,
assuming they are missing due to a local glitch.


>  - QSD may be seen as a single point of failure

Yes, within an ISP and the ITRs the ISP runs, and any ITRs in
end-user networks of this ISP.

For this reason, ISPs should run two of them, and the ITRs should be
able to send queries to either of them.

In the future I will think more about how QSDs respond if they don't
have mapping - such as if they know their mapping for a particular
MAB is wrong, so they are in the process of getting a snapshot.
Maybe the whole database could be unusable because of a loss of
updates from both (typically two) Replicators.

One approach is for the QSD to act like a QSC and send a mapping
query to a QSD in another ISP's network, caching the result and
passing the result back to the original querier.

What happens if both QSDs in an ISP are dead?  Maybe there should
have been three - or maybe the ITRs are configured to talk not to
QSDs, but to QSCs - and the QSCs recognise their QSDs are dead and
instead send requests, via special prior arrangement, to QSDs in
another location, such as another ISP's site.  ISPs could have mutual
arrangements to provide backup QSDs for each other.  Alternatively,
someone could run QSDs for these backup purposes, with secure
arrangements so only the QSCs of paying customers could use them.

There are definitely more things that could go wrong - so there need
to be more arrangements for alternatives which will still work, even if
they are a little slower, a little less reliable and perhaps rather
costly.  These backups won't be needed very often.

ISPs already have to ensure their nameservers (local resolvers) and
mailservers are always working.  Likewise their web servers.  So a
QSD is another thing to keep going.  The good thing is that they are
just a server with software - and some streams from Replicators which
may cost something, though probably not much.  So there can be a few
of them, I think, in any substantial ISP.


>  - Due to traffic growth, QSD must be able to handle a big amount
>    of request

Yes, but there can be multiple QSDs, and by installing QSCs, each
QSD should be able to support more ITRs, since the QSCs will tend to
be able to answer the more common mapping queries from their caches,
saving on the number of queries going to the QSD.

>  - Several interconnection layers may be defined: the physical one
>    with BGP interconnection, on top of it service providers who
>    deploy the CES, interconnection between these SP is required.

I don't clearly understand your question.  The BGP and DFZ
connections for ISPs don't change.

An ISP can run ETRs without having to be involved in Ivip in any way.
In LISP, the ETRs are typically the authoritative query servers for
mapping queries, but in Ivip, they just accept tunneled packets,
detunnel them and then know how to get them to the one or more SPI
using end-user networks which this ETR is supporting.

So having SPI-using customers doesn't absolutely require ITRs, QSCs
etc.  Still, as noted above, it would be best to have these.

Running a QSD means getting two or more streams from level ~4
Replicators.

I expect the RUAS companies will collectively run the Replicators for
level 0 and 1, and maybe some of the level 2 Replicators too.

Each level 0 Replicator will probably be in a different country, but
they will operate as a single system, since they will be fully
meshed, flooding each other with packets carrying the same payloads.

Below that - which means most of the Replicators, since only a few
are on levels 0 and 1 - I would expect ISPs and transit providers to
run Replicators.  It is just a server, and the bandwidth is not
immense, in the context of the data links at these peering points,
major data centers etc.

I would think that one ISP would run a few level 3 and 4 Replicators
at various parts of its network, and send streams to QSDs in other
ISPs' networks not too far away.  In return, those other ISPs would
run Replicators the same way, but from different upstream
Replicators, and send some streams to the QSDs of the first ISP.

There could be commercial services running Replicators, or the RUAS
companies could work together to get Replicators out in many corners
of the Net.  The RUASes would be keen to get ISPs to run their own
ITRs, because this would take the load off their DITRs.

Perhaps this discussion addresses the concern you raised in the
second part of the above question.


>    During the bootstrap, the SPI must be advertised in the core,
>    then it does not solve the scalability issue. This situation
>    will be valid unless a global deployment is adopted

OK - this is a common misconception.

A single MAB is advertised - say 12.34.0.0/16.  That covers 2^16
SPI addresses.  This is a single prefix burdening all DFZ routers,
but it provides space for thousands of end-user networks - SPI space
which is portable and suitable for multihoming and inbound TE.

So this is success.  Only one prefix burden, rather than thousands.
Also, it enables much better use of address space than if all these
end-user networks got their own /24.

You can see from the huge number of /24s which are advertised, far
more than any other length, that there are plenty of networks for
whom 256 addresses is plenty.  Probably many of these networks would
be perfectly happy with just 1 IP address, or 4, or 8 or whatever.

So there could be 10,000, 20,000 - in principle 64k - end-user
networks getting SPI space from this /16, and there is only one
burden on the DFZ.
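The arithmetic here is simple enough to show directly (the average
UAB size is a made-up figure):

```python
# The arithmetic behind "one prefix, thousands of networks": a
# single /16 MAB advertised in the DFZ, versus each end-user
# network advertising its own /24.

mab_prefix_len = 16
spi_addresses = 2 ** (32 - mab_prefix_len)       # 65536 addresses

# If end-user networks take small UABs, say 4 addresses on average:
avg_uab_size = 4
networks_served = spi_addresses // avg_uab_size  # 16384 networks

# The conventional alternative would be one /24 (256 addresses) and
# one DFZ prefix per network, with most of each /24 wasted:
print(f"{networks_served} networks behind 1 DFZ prefix, versus "
      f"{networks_served} prefixes and {networks_served * 256} "
      f"addresses the conventional way")
```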

The fact that multiple DITRs all over the Net advertise this MAB is
not a higher burden on the DFZ.

By "bootstrap" I think you mean getting Ivip going - perhaps until
such time as all end-user networks are using SPI space.

Maybe some other core-edge separation schemes are intended to achieve
a complete separation of all end-user networks onto the new kind of
space.  Not Ivip.

With Ivip, there is no "transition" period working towards any
complete conversion.  The more end-user networks get their SPI space,
the better for scalable routing.

It is OK that some existing end-user networks will keep their current
PI prefixes, using them conventionally, and so burdening the DFZ.
The aim is to stop many more doing this - AND to provide portability,
multihoming and TE to much larger numbers of generally small end-user
networks than would be possible with conventional, unscalable, BGP
advertising of prefixes in the DFZ for each individual end-user network.

Also, it is fine that many end-user networks - primarily small
business, SOHO and home users, will be happy with their current DSL
etc. services with a single PA IPv4 address, fixed or variable on
DHCP.  These are end-user networks too - it's just that they don't
need portability, multihoming etc. and their current use of PA space
is perfectly scalable.


>  - Deployability issues: what to do when several version of table
>    structures, protocol exchange are to co-exist?

I don't clearly understand your question.  The idea is that Ivip will
be standardised by the IETF and various people will write software,
upgrade routers, provide services etc. which will all work together.


>   - This document encloses some business considerations, this is an
>     added value compared to other proposal but the concern I have
>     is that some statement are subjective.

Sure.  I am trying to keep it brief.  Lacking a time machine, any
statement I make about the future is not based on known facts - so it
is subjective.


>   - How to assess the flexibility of the proposed system. Being
>     part of the system should not lead to a frozen situation where
>     no modification is possible: for instance
>     adding/remove/modifying reachability information of
>     ITR/ETR/QSR/RUAS/LQSR should be doable without impact

RUASes will be added from time to time.  MABs will be added.  There
will need to be some kind of carefully maintained master config file
which all QSDs can download, to tell them about every MAB, which RUAS
is responsible for it etc.  Such things could be updated and
downloaded once a day.  So new MABs and new RUASes could be added on
any day boundary.  MABs could be moved from one RUAS to another, but
this will require some thought - will both RUASes be sending mapping
changes?  There is plenty of work to do.  I am not suggesting this is
a complete design - but I am trying to show the design is workable
and more desirable than the alternatives.  The only real alternative
is LISP, and I think they have not documented as much of the business
relationships for LISP as I have for Ivip.

"QSR" and "LQSR" are not Ivip terms.

There's no concept of "ITR reachability" in Ivip.  RUASes communicate
with each other to some extent, and send packets to the level 0
Replicators, but they are not receiving packets from the Net in
general.  They also connect to UAS systems - and the UAS systems
certainly do accept mapping changes from end-user networks or
companies appointed by end-user networks.

ETR reachability is part of what a "Multihoming Monitoring" (MM)
company does when an end-user network appoints them to control the
mapping of their micronets, in order to achieve multihoming service
restoration.  The MM company has multiple servers all around the Net,
working as a single system.  It probes reachability of the end-user
network itself, via its two or more ETRs.  It's not good enough just
to test the ETRs are reachable - some host or router in the network
needs to be reachable through the ETR.

The probing servers tunnel packets to both ETRs, just like an ITR
would do.  They do this no matter what the current mapping is - and
in this example the current mapping tells ITRs to tunnel packets to
ETR-A.  The MM company tunnels to ETR-A and ETR-B, from various of
its servers out in the DFZ.  The inner packet is addressed to the SPI
address of the host or router in the network.  The MM company expects
to get a response back.  (I need to figure out how the response
should ideally come back via the ISP link and ETR which the probe
came through.)

If the mapping is set to ETR-A and the servers detect that the
network can't be reached via ETR-A, but that it is still reachable
via ETR-B, then the MM company sends a mapping change so all ITRs, in
a few seconds time, will be tunneling to ETR-B instead.

Later, once it figures out that the network is reachable again via
ETR-A, it will change the mapping back. Probably the ETR-A link is
faster, or less expensive or in some other way preferable.
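The MM decision logic boils down to something like this - probe()
here stands in for the tunnelled reachability test, which as noted
above is not fully worked out yet:

```python
# Sketch of the Multihoming Monitoring logic described above: probe
# the end-user network through both ETRs and change the mapping
# when the currently preferred path fails.

def choose_etr(current, probe, preferred="ETR-A", backup="ETR-B"):
    """Return the ETR the micronet's mapping should point at."""
    if probe(preferred):
        return preferred    # preferred link works (faster/cheaper)
    if probe(backup):
        return backup       # fail over to the backup link
    return current          # both down: leave the mapping alone

# ETR-A's link has just failed:
reachable = {"ETR-A": False, "ETR-B": True}
new = choose_etr("ETR-A", probe=lambda etr: reachable[etr])
print(new)   # the MM company sends a mapping change to ETR-B

# Later ETR-A recovers, and the mapping is changed back:
reachable["ETR-A"] = True
print(choose_etr(new, probe=lambda etr: reachable[etr]))
```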


> [Med2] : GENERAL COMMENT: I think that a motivation section to
>   question the current needs which justify the introduction of CES
>   or CEE need to be elaborated : what is the trend of PI
>   advertised. If BGP system is able to manage that maximum, then it
>   is more about education effort to enforce best practices of
>   prefix aggregation, etc.

Section 4.1 clearly describes my understanding of core-edge
elimination (CEE) and core-edge separation (CES) architectures, and
the many reasons I chose CES.

The trend in DFZ prefixes is easily visible:

  http://bgp.potaroo.net/

some of these are ISP prefixes, and I think everyone agrees the DFZ
can cope with them and the likely growth in their numbers.

The problem has at least two aspects:

  1 - The PI prefixes are growing in number too fast.

  2 - Even this level of growth means that only a small subset of
      end-user networks which want or need portability, multihoming
      and inbound TE can get it.

Also, some of these end-user networks frequently change the way they
advertise their PI prefixes for TE purposes - and this puts an extra
load on at least some DFZ routers, and perhaps many of them.

CEE or CES could both fix this.  My reasons for choosing CES are
clearly stated in 4.1.


> [Med3] : No value to mention VoIP here.

I mentioned VoIP, just before the word "other" (VoIP was deleted from
your annotated file) because VoIP is something people like to do on
mobile devices.  It is not necessarily what cellphone companies have
in mind when they sell Internet connectivity to customers using
cell-"phones".  But if people can do their own VoIP and find it
cheaper or better than the normal voice service, they will keep doing
it.  Being able to do it on mobile devices is a reason why people
want mobile devices with IP connectivity.  TTR mobility could give
each device its own portable, stable, IP address or /64.  I think
this would make some VoIP applications easier, since the device can
always be reached via its stable, portable, SPI address.  This is
more direct than the mobile device always being a client, on unstable
addresses, and therefore being much harder to reach for incoming calls.


>  I think that the current trend is in line with this statement. The
>  mobile traffic is drastically increasing. Developing countries
>  adopt mobile infrastructure rather than fixed one.

Yes.  I think that in 2020 or 2025 most Internet hosts will be mobile
devices.

>  In addition, new use cases such as sensor networking and M2M may
>  advocate for more and more devices to be connected.

I am not sure what M2M means, but Internet and mobile devices are
natural companions.  The most mobile I get with the Internet is an
Acer netbook with a little Huawei USB 3G/GPRS modem.  It works like a
charm.   I am keen to get a Google phone.


>> wireless links which are frequently slow, unreliable and/or
>> expensive.

>  [Med4] : In the future, this may not be valid since some ISPs
>  offer already mobile broadband services.

My ISP, Internode, gives me the 3G service for $15 a month, including
500Mbytes.  This is a great deal - but not every service is this
inexpensive.

3G or any wireless service is, in my view, much less reliable than
DSL.  It is much slower, and there are significant latencies, plus
extra delays if no data has been sent for a few seconds, since the 3G
modem has to request upstream and downstream bandwidth.  In one
location 3G (WCDMA and HSDPA) was unusable during the day when there
were lots of voice calls.  The connection kept locking up.  So it was
back to 4 channels of GPRS, which probably chews more battery current
and was much slower - but rock solid.

I just think wireless connections are fraught with problems - low
speed, unpredictable performance, etc.

So I don't want to see the Internet changed (as with a CEE
architecture) to something which makes all hosts do more work, do
more lookups, crypto handshakes etc. just to do basic things like
send a packet to another host.  All that extra stuff will take much
longer over wireless links, and I think most hosts will be on
wireless links in 15 years time.

>  [Med5] : MIM techniques or any other technique to ensure
>  robustness of mobile traffic may be envisaged.

I think you mean MIMO - multiple antennae at the base-station and
handset, with fancy signal processing to increase data-rate and/or
robustness, including by using reflections off buildings etc.

Yes, but wireless links are inherently variable - and there is bound
to be latency, since the upstream and downstream are usually
time-division-multiplexed with other handsets - and even then,
capacity in the timeslots needs to be organised beforehand.

>> Below, Ivip is generally assumed to be introduced as a single
>> system for the purposes of solving the routing scaling problem.

> [Med7] : What means system here?

I meant that Ivip is IETF standardized and there is a single mapping
system, which all QSDs and ITRs use.

This is to contrast with one or more companies introducing an
Ivip-like system, most likely to support TTR mobility - but without
any IETF standardization and not necessarily for the purpose of
scalable routing, multihoming for non-mobile networks etc.


On page 10 I had some very loose estimates of mapping update rates.

> [Med9] : I think all this figures are some kind of speculative data
>    Can be removed.

I don't suggest it is "data".  It is just tossing some figures
around, trying to estimate, very roughly, how many micronets (or
end-user networks, which is roughly the same number) there would be
due to non-mobile end-user networks, compared to how many there would
be with really widespread adoption of TTR mobility.

My WAG (Wild Assed Guess) is ~10^7 without mobility and 10^10 with
mobility.  10^7 is easy to do.  I think there's no argument for
LISP-ALT if that is all we are aiming for.  We need to aim for more,
which we can only do by supporting mobility - and not supporting
mobility would be a huge mistake.

In the absence of a time-machine (which would return *data* from the
future!), I think it is better to do a WAG and admit it is wild than
not to think about these things at all.


>>  It would be a private, flexible, arrangement between an end-user
>>  network and a MM company it hires to continually probe the
>>  network's reachability via its two or more ETRs.

>  [Med10] : Why flexible ? compared to what ?

Ivip's approach, where the end-user network would probably hire a
Multihoming Monitoring company to test reachability and control their
mapping, is much more flexible than the way other CES systems work.

These (LISP and APT, most prominently) all lack real-time mapping, so
the end-user networks have no real-time control.  Therefore, all the
end-user network can do is set their mapping so the ITRs will
hopefully, individually, figure out that one ETR is not providing
connectivity and the other is.  The rate and method of reachability
testing can't be specified.  Just testing reachability to the ETR is
not good enough - there has to be a test of connectivity through the
ETR and to the end-user network.  But how can the ITR know which
address in the end-user network to ping?

Nor can the decision criteria in LISP or APT be as flexible as with
Ivip.  The MM company can create almost any kind of decision
algorithm - such as whether to switch mapping for a 1 second outage,
a 10 second outage or whatever.  LISP and APT have to fix very
limited capabilities into all ITRs (Default Mappers for APT) and then
all the end-user network can do is set a few mapping options.

With Ivip, the end-user network has far more flexible control of the
ITRs and it can have the MM company probe reachability and make
decisions in a far wider range of ways than could ever be made
options in the ITR behaviour of a CES such as LISP.
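As a rough illustration - nothing like this is specified in the draft,
and the function, state layout and ETR names are all invented - the
kind of per-tick decision logic an MM company might run for one
micronet could look like this:

```python
def step(state, up, t, etrs, outage_threshold=10.0):
    """One monitoring tick for a single micronet.

    `up` maps each ETR to whether a probe THROUGH it (to a host
    inside the end-user network) succeeded at time t.  Returns the
    ETR to push as the new mapping, or None if nothing changes.
    """
    # Record when each ETR's outage began; forget recovered ones.
    for etr in etrs:
        if up[etr]:
            state["down_since"].pop(etr, None)
        else:
            state["down_since"].setdefault(etr, t)

    current, ds = state["current"], state["down_since"]

    # Fail over once the outage exceeds the end-user's chosen
    # threshold - 1 second, 10 seconds, or any algorithm at all.
    if current in ds and t - ds[current] >= outage_threshold:
        alternatives = [e for e in etrs if e not in ds]
        if alternatives:
            state["current"] = alternatives[0]
            return alternatives[0]
    # Fail back to the preferred ETR (etrs[0]) once it recovers.
    elif current != etrs[0] and etrs[0] not in ds:
        state["current"] = etrs[0]
        return etrs[0]
    return None
```

With a 10 second threshold and one probe a second, an ETR-A outage
produces exactly one mapping change when the threshold is crossed, and
one fail-back change when ETR-A recovers - whereas with LISP or APT
any such policy has to be baked into every ITR or Default Mapper.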


>>   This modular separation of the detection and decision-making
>>   functions from the core-edge separation is good engineering
>> { practice and ensures that the Ivip subsystem can be used
>> { flexibly, including for purposes not yet anticipated.

> [Med11] : Can be dropped

I will keep it - the flexibility is really important and we can't
necessarily anticipate all the future uses for something like this.



>> 2.5. Maximise the flexibility with which ITRs and ETRs can be
>>      located

> [Med13] : What means flexibility here ?

With APT there are only certain places where ITRs and ETRs can be
placed.  I think LISP is pretty restrictive too.

Ivip enables a wider variety of placements including ITRs on SPI
(EID) addresses and in sending hosts.


>>  TTR mobility does not involve mapping changes every time the MN
>>  gains a new physical address, since it continues to use the same
>>  one or more TTRs as its one or more ETRs.

> [Med14] : What do you mean by "physical address" ?

This is probably a confusing term - I will remove it.  I meant that
the MN gets an address which is related to the network it is
physically connected to, as opposed to a "logical" address - a
portable SPI address it gets via the TTR mobility system, which its
applications use and by which other hosts can send packets to it.
That SPI address is completely unrelated to whatever physical network
the MN is connected to.
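To make the distinction concrete, here is a toy sketch - every name,
address and data structure in it is made up for illustration.  The
point is that the global mapping covers only micronet -> TTR, while
the TTR privately tracks the MN's current physical address, so the
MN moving changes nothing that QSDs and ITRs hold:

```python
# Global Ivip mapping: SPI micronet -> ETR.  For a mobile node the
# ETR is its TTR.  (Hypothetical names and addresses throughout.)
ivip_mapping = {"20.0.0.50/32": "ttr-1.example.net"}

# The TTR's own record of the MN's current physical (access-network)
# address for their two-way tunnel - not part of the mapping system.
ttr_tunnel_endpoints = {"20.0.0.50/32": "150.1.2.3"}

def mn_moves(micronet, new_physical_addr):
    """The MN attaches to a new access network: it re-establishes
    its tunnel to the same TTR, so only the TTR's record changes.
    ivip_mapping is untouched - no update needs to be fanned out
    through the Replicators to QSDs."""
    ttr_tunnel_endpoints[micronet] = new_physical_addr

mn_moves("20.0.0.50/32", "203.9.8.7")
```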


 - Robin

_______________________________________________
rrg mailing list
[email protected]
http://www.irtf.org/mailman/listinfo/rrg
