On Mon, Jul 14, 2014 at 11:25:27AM -0400, Wendy Roome wrote:
> On 07/14/2014, 08:02, "Sebastian Kiesel" <[email protected]> wrote:
> >On Fri, Jul 11, 2014 at 02:27:40PM -0400, Wendy Roome wrote:
> >> Eg, when a New York peer asks for peers, the tracker uses the cost map
> >> from the ALTO server nearest New York. For a Tokyo peer, the tracker
> >> uses the cost map from the ALTO server nearest Tokyo. Then the
> >> individual maps wouldn't be as large. But there would be a lot of them.
> >> And the tracker would have to decide which map to use. Perhaps it could
> >> use the network map which has the finest detail around the requesting
> >> peer's address? Whatever that means?
> >
> >Basically yes, but I am a bit sceptical: your example thinks too
> >much in terms of geographical location instead of network topology and
> >administrative domains.
> >Who would be the operator of the "New York City Area ALTO Server"?
> 
> I agree. I just used geographic distance as a simple metaphor for
> topological or administrative distance.
> 
> But the question then becomes: given a set of N Network Maps, how would
> you decide which one is best for costs relative to (say) 1.0.0.0?

I would use the xdom-disc (formerly called 3pdisc) algorithm and perform
some DNS queries in order to find out whether the "owner" of 1.0.0.0
(i.e., the ISP, IT department, or whoever controls reverse DNS for that
address) has configured an ALTO server to be used for the optimization
of traffic from/to this IP address.

If no such specific ALTO server can be discovered, maybe there is a
fallback server with some (coarse grained) global knowledge. This would
have to be discovered by some other means.  If that fails, too, I'd give
up and go on without ALTO guidance.
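The reverse-DNS part of that discovery step can be sketched in a few lines. This is only an illustration of how the lookup domain for an address would be derived; the actual xdom-disc U-NAPTR query is indicated in a comment, and query_naptr() is a hypothetical placeholder, not a real API.

```python
# Sketch of the reverse-DNS step of xdom-disc-style ALTO server
# discovery: derive the in-addr.arpa / ip6.arpa name under which the
# "owner" of an address could publish discovery records.
import ipaddress

def reverse_dns_domain(ip: str) -> str:
    """Return the reverse-DNS name for an IP address, e.g.
    '0.0.0.1.in-addr.arpa' for 1.0.0.0."""
    return ipaddress.ip_address(ip).reverse_pointer

# The discovery procedure would then (hypothetically) do something like:
#   records = query_naptr(reverse_dns_domain("1.0.0.0"))  # placeholder
# and follow the returned URIs to the ALTO server, falling back to a
# globally known server (discovered by other means) if nothing is found.
print(reverse_dns_domain("1.0.0.0"))
```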

> One
> possibility is pick the map with the longest CIDR that matches 1.0.0.0.
> However, that favors sloppy maps that haven't aggregated adjacent CIDRs.
> And of course that map might not be unique.
> 
> One way to filter out sloppy maps is to start with the set of all map(s)
> with the longest CIDR that covers 1.0.0.0. Then add any maps with CIDRs
> that cover 1.0.0.0 but are only slightly shorter than the longest one. Eg,
> if the longest CIDR is 1.0.0.0/24, include maps with 1.0.0.0/22 and
> 1.0.0.0/23, but not 1.0.0.0/16. Then for each of those network maps,
> estimate the size of the PID containing 1.0.0.0 -- that is, estimate the
> maximum number of endpoints in the PID. Then pick the map with the
> smallest PID containing 1.0.0.0. If that's not unique ... I dunno.
> Randomize??
> 
> Estimating PID size might work for ipv4, because by this time, I think
> assigned ipv4 addresses are uniformly dense. But I don't think it would
> work as well for ipv6.

I think I need some more time to think about what could happen (and
possibly go wrong) with this strategy. The possible outcomes are
probably too unpredictable to write this down as a standard.

So far, my idea was that we have basically two different deployment
scenarios for ALTO servers (in an Internet-wide context, others may
apply for the CDN use case, etc.):

1.) An "altruistic and objective organization" (e.g., the same guys that
run large P2P trackers) operates a single ALTO server (of course
replicated for availability and load balancing) with more or less
coarse grained global knowledge.  Just discover it and ask...

2.) The ISPs publish "vectors" (see my mail from earlier today) to be
used for optimization of traffic from/to their customers. Use xdom-disc
to find the "right" ALTO server providing the "right" map/vector for
a given resource consumer.


Do we have more realistic scenarios that need different discovery
mechanisms?


Thanks
Sebastian

_______________________________________________
alto mailing list
[email protected]
https://www.ietf.org/mailman/listinfo/alto
