Hi Dave,

Thanks for the thoughts. Please see inline.
Adding RTGWG since Jeff did so after a later post, so trying to widen the reply audience.

Best,
Dirk

From: David R. Oran [mailto:[email protected]]
Sent: 24 February 2022 23:43
To: Dirk Trossen <[email protected]>
Cc: Dirk Kutscher <[email protected]>; [email protected]
Subject: Re: [icnrg] recasting address semantics

On 24 Feb 2022, at 5:45, Dirk Trossen wrote:

Hi Dirk, all,

Interesting post, indeed. I came across it shortly after posting, since Geoff has been engaged (as he alludes to in the blog) in the Internet addressing discussion in the INT area, so he had fed back his view that "perhaps an address these days is just an ephemeral transport token that distinguishes one packet flow from another, and really has little more in the way of semantic meaning." to us in the discussions on the ongoing drafts in the INT area.

Well, I’d say that it’s a 5-tuple that distinguishes flows, while a single IP address identifies one (or a set of equivalent) attachment points to a network. That seems to be unchanged since the earliest days of IP networking. When used as a destination address, the expectation is that some of your packets going there will get delivered as long as the network is “working”. When used as a source address, it means that if somebody gets a packet carrying it and sends a packet back to that address as a destination, it is reasonably likely to reach that attachment point. So, that’s the “semantic meaning” of an address (as one would say in the department of redundancy department).

As I interpret this, the crux of Geoff’s point isn’t that the semantics are wrong, or somehow different from the early days, but that the utility of IP addresses as important stable identifiers has declined to the point of near irrelevance, and attempts to “reinvigorate” the role of IP addresses for sophisticated services, or as identifiers of more than ephemeral value to higher layers, are a waste of effort. Modern transports like QUIC already accommodate ephemeral IP addresses through multipath and path shifting. Layers above transport no longer expect that arbitrary pairs of endpoint interfaces can directly exchange packets and instead build their own logical topologies through proxies or application intermediaries running in data center virtual machines.

Now, it’s somewhat of a leap from this to say that it’s a total waste of time to try to enhance IP-layer routing to do a better job of delivering packets among network interfaces, because the problems of addresses becoming ephemeral militate against attempts to be overly clever, while at the same time all the interesting stuff happens on top. However, it isn’t that much of a leap, and I’m more than a little sympathetic to Geoff’s analysis here.

But I find this insight of little interest, I must say, since, depending on where you are looking from, it isn’t particularly interesting at all but merely a recognition of how 'service dispatching' at higher layers works once I have arrived at the network location where this problem needs to be dealt with.

I may be misinterpreting you here, but I read this as being exactly backwards. As opposed to what you said, I think first you do “service dispatching” and then you figure out how to get packets to where you dispatched the service.

[DOT] It is purposefully backwards from the perspective of how I interpret approaches that simply focus on funneling traffic to the DC and letting the DC do the dispatching job.
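To make concrete what "dispatching at the DC" looks like once the traffic has been funneled there, here is a minimal, purely illustrative sketch (not anyone's actual implementation; the service names and port are made up): a single front-end address accepts every flow, and the service actually reached is chosen from L7 metadata such as the HTTP Host header (or SNI for TLS), not from the destination IP address the packet was routed on.

```python
# Hypothetical sketch: one listening address, many "services", with the
# dispatching decision taken entirely from L7 metadata (the Host header).
# Service names and the port number are illustrative, not from any real deployment.
from http.server import BaseHTTPRequestHandler, HTTPServer

# All of these "services" share the same IP address and port; the address
# itself no longer identifies which service is being reached.
SERVICES = {
    "video.example":  lambda path: f"video service handling {path}",
    "store.example":  lambda path: f"store service handling {path}",
    "search.example": lambda path: f"search service handling {path}",
}

class FrontEnd(BaseHTTPRequestHandler):
    def do_GET(self):
        # Dispatch on the Host header, not on the destination IP address.
        host = (self.headers.get("Host") or "").split(":")[0]
        handler = SERVICES.get(host)
        if handler is None:
            self.send_response(404)
            self.end_headers()
            self.wfile.write(b"unknown service\n")
            return
        body = handler(self.path).encode()
        self.send_response(200)
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

if __name__ == "__main__":
    # A single socket on one address serves every "service" above.
    HTTPServer(("0.0.0.0", 8080), FrontEnd).serve_forever()
```

The point of the sketch is simply that, once the flow has landed in the DC, the destination address contributes nothing to the dispatching decision.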
Radical propositions like CloudFlare’s ‘single IP for millions of services’ reduce the role of routing to laying a pipe to the DC. The result looks backwards compared to how I see service dispatching being done (i.e. along the lines you point out).

Or, if you want to get really sophisticated, you do some joint scheduling that both instantiates computation and installs routes to get packets there. This, however, is not something you want to delegate to routing, as the owners of the computing resources and the owners of the communication resources are not necessarily the same, which argues for this to be a layer 7, and possibly a layer 8, function and not a layer 3 function.

[DOT] As Brian points out in a later reply, L3 may play a role here, particularly when the scheduling decision is among distributed network locations and possibly dynamic in nature. Of course, L7/L8 mechanisms do play a role in setting up computation resources and signaling constraints on their use (captured in the CAN pre-BoF as ‘compute-aware networking’). So it isn’t about delegating dispatching as a whole to L3 but about the role L3 may play in the overall process.

Some of the most illuminating examples here are in the large-scale video streaming systems, which combine the management of origins with the allocation of CDN resources, coordinated with the edge delivery from access network POPs. A whole business has grown up around this - e.g. Conviva (https://www.conviva.com).

Of course, this view aligns with Geoff's other expressed view that "Internet communication" has converged to a centralized PoP-based model (see, among other occasions, https://github.com/danielkinguk/sarah/blob/main/conferences/ietf-112/materials/Huston-2021-11-10-centrality.pdf). So, when considering his observation on that aspect of centralization, the task of reaching the place where dispatching to the right service takes place has become rather trivial, and it all boils down to his first quoted statement that "perhaps an address these days is just an ephemeral transport token that distinguishes one packet flow from another, and really has little more in the way of semantic meaning."

In the context of that centralization observation, however, I find it somewhat disturbing to position the development of IPv6 as "...an IETF-led command-and-control effort that attempted to anticipate future industry needs and produce a technology that would meet these needs". I personally do not believe that this was wrong as an effort, or even as a goal; on the contrary.

A time traveler would have gone back and whacked the Berkeley students who invented “get host by name” in the sockets API upside the head. Exposing IP addresses above the bottom of the transport layer basically doomed any chance of rapid IPv6 adoption by requiring you to upgrade the network, the host operating systems, and the applications all at once.
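For readers who have not lived through that API history, the contrast below is a small, hedged illustration of the point, sketched with Python's wrappers around the corresponding socket calls ("example.com" is a placeholder host name): gethostbyname() hands the application a literal, IPv4-only address, while getaddrinfo()-style resolution keeps the application agnostic about address family, and higher-level helpers hide addresses from it altogether.

```python
# Illustration only: Python's wrappers around the socket API calls discussed
# above; "example.com" is a placeholder host name.
import socket

# The original style: the application is handed a literal IPv4 address, which
# it then tends to store, compare, or print, baking the address family into
# application logic.
ipv4_literal = socket.gethostbyname("example.com")
print("gethostbyname exposes a raw IPv4 address to the app:", ipv4_literal)

# The address-family-agnostic style: resolution yields candidates for any
# family (IPv4 or IPv6), and the application need not care which is used.
for family, _type, _proto, _canon, sockaddr in socket.getaddrinfo(
        "example.com", 443, type=socket.SOCK_STREAM):
    print("candidate:", socket.AddressFamily(family).name, sockaddr)

# Higher-level helpers hide addresses entirely; the application deals in names.
with socket.create_connection(("example.com", 443), timeout=5) as conn:
    print("connected over", conn.family.name)
```

Nothing new here, of course; it just shows why code written against the first style has to change when the address family underneath it changes, which is the adoption cost being referred to.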
What I find wrong is to turn the observation that "Today's Internet is dominated by Content Distribution Networks (CDNs) and their associated 'clouds'." into apparent guidance for developing Internet technologies that may want to go beyond this model, since it risks skewing technology development along a dominant economic model.

I think there are three things to be disentangled here:

1. Physical centralization in data centers, as the economics of computing has strongly favored that deployment model.

[DOT] No disagreement there.

2. Administrative centralization, where there are only a few dominant players, because all of them were able to leverage existing investments in their computing infrastructures to capture large market share (e.g. Google through search, Amazon through product sales and fulfillment, etc.).

[DOT] Again, agree.

3. More sophisticated applications requiring multi-party distributed operation rather than simple point-to-point data transfer. This could be just for scaling (e.g. machine learning, map-reduce data analysis), for robustness against outages and unreliable performance characteristics (e.g. Akamai CDNs), or for the economics of communication pricing (video CDNs).

Some would say that either 1 or 3 inevitably leads to 2 unless mitigated by laws or regulation. I don’t take a position on that, but whether or not we are doomed to be stuck with #2, the other two phenomena by themselves reduce the importance of layer 3 routing services/capabilities relative to those done above.

[DOT] My main point was that the view on “today’s” economic situation is possibly not the best starting point for developing something forward-looking, including on L3 routing – see my point below.

I would argue that the strength of the Internet's original innovation philosophy, enabling serendipitous discovery and permissionless innovation, is something that may go down the drain with that view. Or, vice versa, I wonder whether most of what we enjoy as 'the Internet' today would have emerged if this attitude had been at the forefront of its origin some 40-odd years ago (I can hear a similar "Today's..." statement, just from a different community back then, wondering what those IP people really want...). Maybe we are simply observing the innovator's dilemma here?

No, there’s lots of innovation happening. Just not in L3 routing. The few substantive attempts to change the architecture at layer 3 to better match the evolution of the Internet have not gotten much, if any, traction (e.g. ICN, XIA, MobilityFirst), but at least they didn’t start by accepting the constraints of the IP packet format or the addressing and routing mindset it engenders.

Sorry for the long ramble. The more I read about the “semantic routing” work, the more pessimistic I am that it will lead anywhere useful. Perhaps you will prove me wrong in the end.

[DOT] Time will tell, although, related to this discussion, this wasn’t about semantic routing. As you know, I’ve been active in L3 innovation for some time, including ICN, and its lack of traction in favour of a dominant economic model has been troubling me all along.

Best,
DaveO

Best,
Dirk

-----Original Message-----
From: icnrg [mailto:[email protected]] On Behalf Of Dirk Kutscher
Sent: 24 February 2022 11:19
To: ICNRG <[email protected]>
Subject: [icnrg] recasting address semantics

Hi ICNRG,

you may find Geoff's recent blog post interesting:

https://blog.apnic.net/2022/02/01/address-semantics/

Best regards,
Dirk
