Hi Wendy,

On Fri, Jul 11, 2014 at 02:27:40PM -0400, Wendy Roome wrote:
> Sebastian,
> 
> How would you see the tracker doing that? Would it really merge maps from
> N separate ALTO servers into one master map? Given that each ALTO server
> can define PIDs however it wants, is that even possible?

Non-aligned PIDs are a problem, but I think it could be solved if it
were the only one.  The tracker could create a local PID for every
prefix learned from any remote network map, and then distribute the
incoming cost maps on these local PIDs. In the extreme case, it could
create a 2^32 x 2^32 map and store the cost for every pair of IPv4
addresses (even with some sort of compression, 2^64 cells, or 2^256
for IPv6, is quite a number).

The more interesting question is how to handle contradictory entries for
the same normalized (src,dst)-pair if received from different ALTO
servers.

So ...

> Or do you see the tracker as maintaining copies of each of the N ALTO cost
> maps separately, and using the one that is closest to the requesting peer?

... yes.  And the total information contained in these many small maps
would probably be equivalent to, say, a 5000x5000 matrix.

> Eg, when a New York peer asks for peers, the tracker uses the cost map
> from the ALTO server nearest New York. For a Tokyo peer, the tracker uses
> the cost map from the ALTO server nearest Tokyo. Then the individual maps
> wouldn't be as large. But there would be a lot of them. And the tracker
> would have to decide which map to use. Perhaps it could use the network
> map which has the finest detail around the requesting peer's address?
> Whatever that means?

Basically yes, but I am a bit sceptical: your example thinks too much
in terms of geographical location instead of network topology and
administrative domains.
Who would be the operator of the "New York City Area ALTO Server"?

My vision of a deployment scenario with distributed ALTO knowledge is:

Every operator of an access network can publish (e.g., via the DNS),
for each of their prefixes, which ALTO server (i.e., which IRD URI) is
in charge.  These ALTO servers will deliver very sparse cost maps,
which are de facto only a "cost from us to anywhere"-vector.

If I am a network operator in, say, Europe, I know the costs from my
access networks to the rest of the Internet (both in terms of routing
protocol costs and other traffic engineering parameters as well as
monetary costs for network interconnections).  But from where would I
know the cost from, say, Tokyo to New York City (or, to be more precise,
from a specific access network in Tokyo to a specific access network in
New York City)?  And even if I knew, why should I bother publishing it?
I might even run into legal liabilities if I gave bad advice?! That's
why I would only publish a "from us to anywhere"-vector (or a sparse
"from our prefixes to anywhere"-matrix).   This vector would
be rather detailed about possible destinations in my vicinity and less
detailed about other continents or difficult-to-reach networks.
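
Such a vector might look like the following sketch (all prefixes and
cost values are invented; the lookup is a simple longest-prefix match):

```python
import ipaddress

# Hypothetical "from us to anywhere" cost vector of a European operator:
# fine-grained entries for nearby destinations, a coarse default elsewhere.
cost_vector = {
    "192.0.2.0/24":    5,    # a neighbouring access network, known well
    "198.51.100.0/24": 8,
    "203.0.113.0/24":  50,   # far away, only a rough estimate
    "0.0.0.0/0":       100,  # default: everything else
}

def lookup(addr: str) -> int:
    """Return the cost for the longest matching prefix."""
    ip = ipaddress.ip_address(addr)
    best = max(
        (ipaddress.ip_network(p) for p in cost_vector
         if ip in ipaddress.ip_network(p)),
        key=lambda n: n.prefixlen,
    )
    return cost_vector[str(best)]

print(lookup("192.0.2.42"))    # fine-grained entry nearby
print(lookup("8.8.8.8"))       # falls back to the coarse default
```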

And then, when a peer asks the tracker, it would be the task of the
ALTO client in the tracker to do a "back-connect" to the ALTO server
that is in charge for the peer's prefix, to get the appropriate cost
vector.
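
In pseudo-code, the tracker-side flow could look roughly like this
(the discovery table and the fetch function are placeholders for a real
DNS-based lookup and a real ALTO request; all names and costs are made
up):

```python
import ipaddress

# Hypothetical mapping of peer prefixes to the responsible ALTO server.
discovery = {"192.0.2.0/24": "https://alto.example-eu.net/ird"}

def fetch_cost_vector(ird_uri, peer_ip):
    # Placeholder for the "back-connect": a real implementation would
    # query the ALTO server at ird_uri for costs from the peer's network.
    return {"198.51.100.7": 8, "203.0.113.9": 50}

def rank_candidates(peer_ip, candidates):
    # Find the ALTO server in charge of the requesting peer's prefix.
    ip = ipaddress.ip_address(peer_ip)
    uri = next(u for p, u in discovery.items()
               if ip in ipaddress.ip_network(p))
    costs = fetch_cost_vector(uri, peer_ip)
    # Return candidate peers sorted by cost, cheapest first.
    return sorted(candidates, key=lambda c: costs.get(c, float("inf")))

print(rank_candidates("192.0.2.42", ["203.0.113.9", "198.51.100.7"]))
```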

> Yet another approach would be for the tracker to require each peer to
> provide the uri for its local ALTO server. The tracker would send an ECS
> to that ALTO server to evaluate the costs of the other peers. Of course,
> then the tracker must make an ALTO query for every peer request, which
> will add a lot of latency to each request.
> 
> Although ... heh heh ... that could be one way for a tracker to do
> cross-domain ALTO server discovery!

Yeah, I know.  We discussed this back in 2009
(http://tools.ietf.org/id/draft-kiesel-alto-3pdisc-00.txt , Approach #5).
This approach requires changes to all existing (P2P) application
protocols that want to benefit from ALTO.


Thanks
Sebastian

_______________________________________________
alto mailing list
[email protected]
https://www.ietf.org/mailman/listinfo/alto