Hi Bill,

How's that different from, say, cnn.com, aggregated with the other
Turner services and spread across two of Turner's three unicast
routes? If TCP was stable over anycast, they might actually be able to
drop one of those unicast routes.


So, Turner is aggregating (hopefully) large blocks of end-hosts behind each of those prefixes. This aggregation is purely a network layer issue and is not tied to transport or applications.

Anycast looks to be a pure application/service layer issue. Yes, ok, you COULD aggregate anycast for common services that have the same source. I don't see how that helps the general case of services that are supported by multiple sources (e.g., DNS, NTP). Those don't seem to aggregate at all.

In some sense, you could draw the analogy between anycast and a promiscuous use of PI addresses.
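The asymmetry here can be illustrated with Python's standard `ipaddress` module (the prefixes below are documentation placeholders, purely for illustration): adjacent unicast prefixes from a common origin collapse into one route, while anycast prefixes originated by unrelated sources must each stay distinct.

```python
import ipaddress

# Unicast aggregation: two adjacent /25s behind the SAME origin
# collapse into a single /24 announcement.
a = ipaddress.ip_network("198.51.100.0/25")
b = ipaddress.ip_network("198.51.100.128/25")
merged = list(ipaddress.collapse_addresses([a, b]))
print(merged)  # [IPv4Network('198.51.100.0/24')]

# Anycast: the same service prefix is originated independently by
# multiple unrelated sources (e.g., root DNS instances), so each
# instance is a separate route and nothing collapses.
```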


Seems to me that the aggregability of services has more to do with the
size of the services set than it does with whether the service is
unicast or anycast. Big important web site = at least one more route
in the table.


Who decides what's big and important? And how does that not end up in an international court (e.g., ICANN)? If every web site ends up with a route, then it's game over.


I think of deterministic as "pick one direction" while
non-deterministic is "travel all valid directions." I'm open to a
better word choice.

Non-determinism in the classical CS sense would be 'travel in the right direction', where the direction is chosen without regard to the content of the packet or the state of the network, and yet, somehow, the right decision is always made.

I'd favor 'unicast' for "pick one direction" and 'multicast' for "pick all valid directions".
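As a rough sketch of that distinction (the forwarding table and interface names here are invented for illustration):

```python
# Toy forwarding table: prefix -> interfaces with a valid route to it.
TABLE = {"192.0.2.0/24": ["if1", "if2", "if3"]}

def unicast_forward(prefix):
    # "Pick one direction": exactly one next hop is selected.
    return TABLE[prefix][0]

def multicast_forward(prefix):
    # "Travel all valid directions": the packet is replicated
    # out every interface with a valid route.
    return list(TABLE[prefix])

print(unicast_forward("192.0.2.0/24"))    # 'if1'
print(multicast_forward("192.0.2.0/24"))  # ['if1', 'if2', 'if3']
```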


I had a discussion with a colleague earlier today. He was having a
problem with a mapping app on his web site going through a particular
proxy server that didn't support HTTP keep-alive. To render a map page
with all the plotted points, he would generate around 500 SSL HTTP
requests. With keep-alive this would generate around 30 SSL
connections. Without keep-alive it generates 500 and runs so very much
slower.

Where's the error here? The proxy that doesn't support keep-alive? Or
the app that needs to generate 500 HTTPS queries in order to render a
web page? Which one do you zero in on and change?
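A back-of-the-envelope model of why keep-alive matters so much here (the round-trip counts are assumed for illustration, not measured):

```python
# Sketch: cost of 500 requests with and without HTTP keep-alive,
# counted in round trips. The per-connection setup figure is a
# rough assumption (TCP 3-way handshake + SSL/TLS negotiation).
SETUP_RTTS = 4    # assumed cost to open one TCP + SSL connection
REQUEST_RTTS = 1  # one round trip per request once connected
REQUESTS = 500    # requests to render the map page
POOL = 30         # connections kept open with keep-alive

with_keepalive = POOL * SETUP_RTTS + REQUESTS * REQUEST_RTTS
without_keepalive = REQUESTS * (SETUP_RTTS + REQUEST_RTTS)

print(with_keepalive, without_keepalive)  # 620 vs 2500 round trips
```

Under these assumptions the keep-alive-free path costs roughly 4x the round trips, which matches the "so very much slower" experience above.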


Having to have proxies (ALGs) in the network in the first place. Evil, evil, evil. Yes, I have to live behind one.


The upper layers set requirements on the routing layer in a manner the
routing layer can't handle efficiently. One answer is that we redesign
the routing layer. Another is that we alter the requirements. Ruling
either answer out of scope would, at the very best, be failing to
think outside the box.


Ok, but asking the routing layer to defy the laws of physics (i.e., scale without aggregation) is unlikely to be productive.

Tony

_______________________________________________
rrg mailing list
[email protected]
http://www.irtf.org/mailman/listinfo/rrg