On Sat, Mar 2, 2019 at 11:50 AM Brian E Carpenter
<[email protected]> wrote:
>
> On 03-Mar-19 06:35, Tom Herbert wrote:
> > On Fri, Mar 1, 2019 at 7:18 PM Brian E Carpenter
> > <[email protected]> wrote:
> >>
> >> On 02-Mar-19 14:46, Tom Herbert wrote:
> >>> Hi Brian,
> >>>
> >>> One comment...
> >>>
> >>> From the draft:
> >>>
> >>> "5. Firewall and Service Tickets (FAST). Such tickets would
> >>> accompany a packet to claim the right to traverse a network or request
> >>> a specific network service [I-D.herbert-fast]. They would only be
> >>> valid within a particular domain."
> >>>
> >>> While it's true that Firewall and Service Tickets (in HBH
> >>> extension headers) are only valid in a particular domain, that really
> >>> means that they are only interpretable in the origin domain that
> >>> created the ticket. It's essential in the design that FAST tickets can
> >>> be exposed outside of their origin domain (e.g. used over the
> >>> Internet) and reflected back into the origin domain by peer hosts.
> >>> FAST tickets contain their own security (they are encrypted and signed
> >>> by an agent in the origin network), so there should never be any reason
> >>> for a firewall to arbitrarily filter or limit packets with FAST
> >>> tickets attached. This technique could probably be applied to some of
> >>> the other use cases mentioned.
> >>
> >> Yes, that's an interesting model: effectively a domain split into various
> >> parts without needing a traditional VPN.
> >>
> >> Of course, there remains the bogeyman of making the Internet transparent
> >> to some new unknown option or extension header. I'm pessimistic about that.
> >> So far we have had poor success.
> >
> > Maybe, although I wouldn't phrase it exactly that way. Protocol
> > ossification of the Internet and the abandonment of the End-to-End
> > model have made evolution of the Internet harder, but I don't believe
> > it is yet proven impossible.
> > This goes back to my primary concern that
> > if the concept of limited domains is standardized, some people will
> > use it as rationalization to justify non-conformant implementations and
> > proprietary, non-interoperable solutions as somehow being compatible
> > with Internet architecture and ideals.
>
> I certainly acknowledge that risk; but (having lived with this problem
> in some form or other since RFC 2101) I really think we can't duck it any
> longer.
>
> Also, we really have standardized limited domains already, in numerous
> places - segment routing and detnet being recent examples. I think
> ultimately what we're arguing in the draft is: let's do it properly.
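[The ticket model discussed above (opaque to hosts outside the origin domain, yet verifiable by the origin's agent when reflected back) can be sketched roughly as below. This is an illustrative toy only: the OriginAgent class, the request-plus-HMAC layout, and all names are assumptions for exposition, not the draft-herbert-fast wire format, which also encrypts ticket contents.]

```python
# Toy sketch of the FAST ticket trust model: a domain-local secret lets the
# origin domain's agent issue and later verify tickets, while everyone else
# treats them as opaque bytes to be carried and reflected unchanged.
import hashlib
import hmac
import os


class OriginAgent:
    """Hypothetical agent in the origin domain that issues and verifies tickets."""

    def __init__(self):
        # Domain-local secret; it never leaves the origin domain.
        self._key = os.urandom(32)

    def issue(self, service_request: bytes) -> bytes:
        # Ticket = request || MAC(key, request). Hosts outside the domain
        # cannot forge a valid ticket, only carry one.
        tag = hmac.new(self._key, service_request, hashlib.sha256).digest()
        return service_request + tag

    def verify(self, ticket: bytes) -> bool:
        # A firewall at the domain edge checks the MAC on tickets that
        # peers reflect back; no per-flow state is required.
        request, tag = ticket[:-32], ticket[-32:]
        expected = hmac.new(self._key, request, hashlib.sha256).digest()
        return hmac.compare_digest(tag, expected)


agent = OriginAgent()
ticket = agent.issue(b"allow:udp/443")

# A peer host on the open Internet treats the ticket as opaque bytes and
# simply reflects it in replies toward the origin domain.
reflected = bytes(ticket)

assert agent.verify(reflected)               # genuine ticket passes
assert not agent.verify(b"X" * len(ticket))  # forged ticket fails
```

[This is why, as argued above, an on-path firewall has no reason to drop packets merely for carrying such a ticket: the ticket's validity is enforced cryptographically by the origin domain, not by intermediate filtering.]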
Hi Brian,

Yes, limited domains in the form of administrative domains and security
domains have existed for quite some time (since NAT, RFC 1918, and
firewalls first came into being). Every service provider and datacenter
operator has implemented a domain. But those domains were distinguished
by the operation and use of protocols, not by the definition of a
protocol or its interoperability, which is what seems to be proposed in
the draft.

The draft states: "This document analyses and discusses some of the
consequences of this trend, and how it impacts the idea of universal
interoperability in the Internet." I don't see much discussion in the
draft of the impact on interoperability. This is critical:
interoperability is a core principle of the Internet. If, for instance,
a protocol is specified to only work and be interoperable within a
limited domain, would it still be called an Internet protocol? (To me,
it seems like an oxymoron to call something an Internet protocol that
doesn't even work on the Internet.) It would be good for the draft to
elaborate on this.

Also, I think there should be some mention here of ossification of the
Internet. Will standardizing limited domains mitigate the ossification
problem or perpetuate it? (AFAICT, right now it seems more likely to be
the latter.)

Tom

> Brian

_______________________________________________
Int-area mailing list
[email protected]
https://www.ietf.org/mailman/listinfo/int-area
