On Oct 25, 2010, at 12:53 PM, Fred Baker wrote:

>>>> Regardless, nothing the authors are doing with this flavor of NAT (unless 
>>>> I'm mistaken about it) should break end to end connectivity between 
>>>> devices running IPv6 since it's a 1:1 stateless mapping. A FW with 
>>>> stateful inspection and packet filtering rules would...but in that case
>>>> the person deploying the FW WANTS that connectivity broken. If you're 
>>>> trying to argue that people should not be allowed to deploy FW's.... well 
>>>> then, good luck with that.
>>> 
>>> Agreed. Not that there are not issues with prefix translation; there are 
>>> applications and application deployments that make the (for the past 15 
>>> years indefensible) assumption that an address carried as a literal in an 
>>> application will be meaningful to its peer, and those applications will 
>>> have the same RFC 2993 problems that IPv4 NAT imposes.
>> 
>> Fred, it's not an indefensible assumption.  It's how the Internet was 
>> designed to work.  It's the behavior specified by the core Internet protocol 
>> standards.
> 
> It also is definitely *not* the way the Internet has been deployed for the 
> past fifteen years. Anyone that puts IPv4 literals into HTTP or SIP/SDP 
> headers *has* to use a global address, or anyone across a NAT can't 
> communicate with them. What I am pointing out, in case you missed it, is that 
> there is also a problem in the transition/coexistence phase with the 
> assumption that just because I speak IPv[46] that you can speak IPv[46] and 
> that there is a reliable route using IPv[46] between us. Putting an IPv[46] 
> literal into an application header is not only stupid because of the coupling 
> implied - that your address space and mine overlap - but because of the 
> coexistence coupling.

Oh, of course.  If one peer is speaking only v4 and the other only v6, then 
there has to be some sort of NAT or proxy between them, and trying to do 
referrals across a v4/v6 boundary using the traditional host interface isn't 
likely to work.  What is really needed is to define a general-purpose and 
universally supported way for a v4-only host to open up a connection to a v6 
address, and vice versa.  Absent that, referrals need to contain both a 
(global) v4 and v6 address so that any peer can deal with them...though I don't 
see what any of that has to do with NAT66.
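A minimal sketch of such a dual-address referral (the referral format and helper name here are illustrative, not any standard API): the referral carries both a global v4 and a global v6 literal, and the peer tries whichever families it can actually use.

```python
import socket

def connect_referral(referral, port, timeout=5.0):
    """Try each address in a referral that carries both a global IPv6
    and a global IPv4 literal, so either kind of peer can use it.
    The dict format (keys "ipv6"/"ipv4") is hypothetical."""
    last_err = None
    for addr in (referral.get("ipv6"), referral.get("ipv4")):
        if addr is None:
            continue
        try:
            # getaddrinfo picks the right family for a literal address
            for family, socktype, proto, _, sockaddr in socket.getaddrinfo(
                    addr, port, type=socket.SOCK_STREAM):
                s = socket.socket(family, socktype, proto)
                s.settimeout(timeout)
                try:
                    s.connect(sockaddr)
                    return s          # first family that works wins
                except OSError as e:
                    s.close()
                    last_err = e
        except socket.gaierror as e:
            last_err = e
    raise OSError(f"no usable address in referral: {last_err}")
```

A v6-only host simply never succeeds on the v4 entry and vice versa; the referral works for both without any v4/v6 translator in the path.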

(And there's nothing at all that's stupid about assuming that your address 
space and mine overlap.   That's one of the fundamental assumptions of IP.  
What's stupid is to assume that we can somehow manage to have apps interoperate 
in a world where every network is free to define its own address space.)

>> Applications need to have a predictable, well-documented environment in 
>> which to operate.   Why in the world you think that networks should be able 
>> to violate the standards  and that somehow it's the application developers' 
>> responsibility to adapt to whatever random brain damage the network 
>> operators decide to impose is beyond me.  But it makes no sense from an 
>> engineering perspective.
> 
> Perhaps. It is, however, reality.

Which RFC documents "reality"?  Or to put it another way, where is the document 
that explains what applications are expected to do to cope with this "reality" 
of which you speak?  And absent such a document, assuming that someone manages 
to write an application that copes with present-day reality, what's the chance 
that the application will continue to work next year?

We used to have a clear separation of function between hosts/applications and 
the network.  The hosts/applications gave traffic to the network to be 
delivered to destination addresses; the network made a best effort to deliver 
them intact to those addresses.   The network could lose or duplicate packets 
but could not deliberately alter them.  That was a very simple and elegant 
separation of function with a very useful result: if the network couldn't get 
some packets to their destinations, it wasn't the responsibility of the hosts 
or applications to deal with it - since the network was already doing its best 
effort and the network was in a better position to get them there than the 
hosts.   The hosts/apps had the responsibility to deal with the occasional 
dropped packet or connection but not to try to build their own packet routing 
systems. 

Now we have a world where network operators feel free to alter packets for any 
reason they see fit: not just NAT but also interception proxies, 
traffic shapers, connection forwarders, etc.  And application writers are 
expected to cope with this undocumented "reality".  So, for instance, they 
become port agile (even when they have assigned ports), they tunnel traffic 
over HTTP, they define their own address spaces, they do their own routing, 
they set up dedicated intermediaries residing in global address space that can 
do referrals and traffic forwarding, they try to obscure their traffic so that the 
intermediaries imposed by the network don't screw them up.  And "reality" just 
keeps getting more complicated every year.  NATs aren't the only things to 
blame, but the fact that there are other culprits doesn't lessen the 
culpability of NATs.
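One of those workarounds, port agility, can be sketched as follows (the fallback port choices are assumptions about what middleboxes typically leave open, not any assigned behavior):

```python
import socket

# Ports that middleboxes usually pass unmolested -- an assumption for
# illustration, not an IANA assignment.
FALLBACK_PORTS = [443, 80]

def port_agile_connect(host, assigned_port, timeout=3.0):
    """Try the application's assigned port first; if the network blocks
    it, fall back to ports the network is unlikely to interfere with.
    This is the kind of coping behavior described above, not a
    recommended design."""
    errors = []
    for port in [assigned_port] + FALLBACK_PORTS:
        try:
            return socket.create_connection((host, port), timeout=timeout)
        except OSError as e:
            errors.append((port, e))
    raise OSError(f"all candidate ports failed: {errors}")
```

Note that the application ends up re-implementing reachability logic that, under the old separation of function, was entirely the network's job.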

This isn't at all specific to NAT66, but it applies here as much as it does 
anywhere.  If NATs are going to be in the network long term, then they need to 
be explicitly visible, and there needs to be a standard (as in well-defined and 
well-documented), universally applicable, viable way for applications to do 
what they need to do that works both when NATs are not present and when they 
are.  And that includes referrals.

> 
>> If there really is a need for NATs in the network (which has not been 
>> established) then it's incumbent for IETF to standardize them in such a way 
>> that applications still have a predictable environment which lets them do 
>> what they need to do (modulo policy).  So far, IETF has not done that.  What 
>> it has mostly done is fail to provide clear direction.
> 
> Well, it actually has been established. That's the point the people you are 
> so actively not listening to are trying to make. The need for address 
> translation has nothing to do with topology obfuscation, although that point 
> is frequently brought up; if I can read your SMTP envelopes I can figure out 
> your topology well enough for any nefarious purposes I might have. And it has 
> nothing to do with address amplification, which I suspect is what you're 
> responding to as "unproven". It has to do with the sizes of route tables in 
> the core and the politics of attachment at the edge. 

> 
> If we use the existing prefix allocation policy assumptions, which are built 
> around enumerating the objects at the edge, we wind up with as many routes in 
> the route table as we have objects at the edge. Aggregation of routes, which 
> is what we use to make routing scale in the Internet, simply doesn't exist. 
> Current estimates I see talk about 10^7 routes within 15 years, and in a 
> world of 10^10 people I suspect it's a lot more than that over time. If you 
> want to pay for the silicon, heat, and power, I'll happily sell you the 
> routers, but it turns out that you're talking about a lot of silicon, a lot 
> of heat, and a lot of power, which is to say a lot of money. Transit 
> providers are looking for solutions there.

And somehow we're going to solve this problem by pushing it to hosts and 
applications?  Without even giving them a global address space to work with?  
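For scale, Fred's numbers above work out roughly as follows; the per-route memory figure is an assumed, illustrative cost, since real FIB hardware (TCAM/SRAM) varies widely by platform.

```python
# Back-of-envelope for the route-table estimate quoted above.
routes_15yr     = 10**7   # Fred's ~15-year estimate of core routes
bytes_per_route = 64      # ASSUMED FIB entry size, for illustration only

def fib_bytes(n_routes, per_route=bytes_per_route):
    """Raw FIB memory needed, ignoring indexing overhead."""
    return n_routes * per_route

print(fib_bytes(routes_15yr) / 2**20, "MiB per forwarding table (assumed)")
```

Even at these crude numbers, that memory has to be fast-path memory replicated per line card, which is where the silicon, heat, and power costs come from.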

> At the edge, there is a huge push for PI addressing. You can say that 
> shim6-or-etc enables the edge networks to be independent of their providers 
> by having a prefix from each of them; the edge networks are voting with their 
> feet in that regard and saying it's not a network they're willing to operate. 
> They want address independence, and they don't want to have to do anything 
> when they change providers. They see PI addressing as the solution.

I think the general statement is that everybody wants routing scaling to be 
somebody else's problem.  The transit networks don't want to change how they 
route packets, the enterprise networks don't want to accept PA addressing, the 
application writers don't want to have to try to probe the network to figure 
out which combinations of source interface, source address, destination 
address, and (sometimes) intermediary will get the traffic there. 

> And I will argue that ILNP (which requires every host in the world to be 
> upgraded and therefore is IMHO a non-starter, although as a solution I prefer 
> it)

I haven't looked at ILNP (I will), but I seem to recall that the hosts deployed 
IPv6 far, far ahead of everything else.  

> or NAT66 as proposed in this document (which reduces the problem to one of 
> updating CPEs by updating the checksum) handles both issues. The transit 
> networks get to view their route tables as entirely PA, and therefore as 
> having been advertised by O(10^4) ISPs or large corporate networks, and the 
> edge networks get the address independence they desire. That may be "broken" 
> from your perspective, but from the guy-paying-the-money's perspective that 
> is not broken. It is exactly what they are looking for.

Depends on what business you're in, doesn't it?  If you're in the business of 
writing applications, it doesn't look very good at all.
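For concreteness, the 1:1 stateless mapping discussed upthread (NPTv6-style checksum-neutral prefix translation) can be sketched like this. This is an illustrative sketch under stated assumptions (/48 prefixes, word 3 used for the adjustment), not the normative algorithm:

```python
import ipaddress

def csum16(words):
    """One's-complement sum of 16-bit words, folded to 16 bits."""
    s = sum(words)
    while s >> 16:
        s = (s & 0xFFFF) + (s >> 16)
    return s

def words16(addr):
    """An IPv6 address as eight 16-bit words, high word first."""
    return [(int(addr) >> (112 - 16 * i)) & 0xFFFF for i in range(8)]

def npt66(addr, old_net, new_net):
    """Checksum-neutral /48 prefix translation: replace the prefix and
    adjust word 3 so the one's-complement sum of the address is
    unchanged, which keeps embedded TCP/UDP checksums valid without
    the translator rewriting them.  Sketch only; a real translator has
    more corner cases."""
    w = words16(addr)
    old_p = words16(old_net.network_address)[:3]
    new_p = words16(new_net.network_address)[:3]
    # delta = sum(old prefix) - sum(new prefix), in one's-complement
    # arithmetic (where ~x acts as -x)
    delta = csum16(old_p + [(~x) & 0xFFFF for x in new_p])
    w[:3] = new_p
    adj = w[3] + delta
    adj = (adj & 0xFFFF) + (adj >> 16)
    if adj == 0xFFFF:   # 0xFFFF and 0x0000 are the same value in
        adj = 0         # one's-complement; normalize to zero
    w[3] = adj
    out = 0
    for x in w:
        out = (out << 16) | x
    return ipaddress.IPv6Address(out)
```

Because the per-address sum is preserved and the mapping is its own inverse with the prefixes swapped, the box needs no per-flow state, which is exactly why it leaves end-to-end transport connectivity alone while still breaking any address literal carried inside an application payload.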

Keith

_______________________________________________
nat66 mailing list
[email protected]
https://www.ietf.org/mailman/listinfo/nat66