Hi,
(We are discussing this in int-area rather than behave, following the
guidelines of the ADs.)
Dave suggested that we could use v4-mapped addresses to represent v4
addresses on the v6 side when using the NAT64.
This has the advantage that dual-stack nodes will automatically prefer
native v6 connectivity, because the RFC 3484 default policy table
(reproduced below) implies that.
(The main argument is that if we use a different prefix, dual-stack
hosts will need to implement some way to determine which connectivity is
native and which is translated. Legacy hosts will not be able to make
that distinction and may end up using translated connectivity, and the
apps that don't work through a NAT would fail.)
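For reference, this is the default policy table from RFC 3484 section
2.1; the low precedence of the ::ffff:0:0/96 entry is what makes native
v6 destinations win over v4-mapped ones:

      Prefix         Precedence  Label
      ::1/128        50          0
      ::/0           40          1
      2002::/16      30          2
      ::/96          20          3
      ::ffff:0:0/96  10          4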
We have been running some tests on how current OSes handle v4-mapped
addresses, and the results are not very encouraging.
We (well, mostly Iljitsch, actually) have set up the following simple
test scenario.
We have an IPv6 server and we have created the following RRs:
All under runningipv6.net:
test1  IN AAAA ::ffff:83.149.65.1
test2  IN AAAA ::ffff:83.149.65.1
       IN AAAA 2001:1af8:2:5::2
Note that ::ffff:83.149.65.1 is unreachable in IPv6, but 83.149.65.1 is
reachable in IPv4.
So, when we have a dual-stack machine as a source, the following happens
when we try to go to these FQDNs using Safari on Mac or IE on Vista (we
captured the packets to determine which IP version was used):
Dual-stack Windows Vista and Mac OS X Leopard connecting to test1: it
works and sends IPv4 packets. This seems reasonable, but I wonder if it
is always OK.
Dual-stack Windows Vista and Mac OS X Leopard connecting to test2: it
works and they both send IPv6 packets (this is what we want!).
When we have a source machine that is v6-only (i.e. the v4 stack is
disabled), the following happens:
Mac OS X Leopard (v6-only) sending to test1: doesn't send any packets.
Windows Vista (v6-only) sending to test1: doesn't send any packets.
Mac OS X Leopard (v6-only) sending to test2: stalls and then works using
v6 packets, so it seems it first preferred the mapped address and then
fell back to the real v6 address.
Windows Vista (v6-only) sending to test2: it works, preferring the real
v6 address right away.
So, the conclusion from this is that we have a problem with v6-only
hosts, since they seem to assume that v4-mapped addresses are not usable
when the v4 stack is down, even when they get such an address in a DNS
AAAA RR.
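To make the failure mode concrete, here is a minimal sketch (assuming a
host that resolves through getaddrinfo(); test2 is the record from our
zone above) of the lookup an application would do. On a v6-only host the
question is whether the v4-mapped address is returned at all, and in
which order relative to the native address:

  import socket

  # Ask for IPv6 results for test2, which has both a v4-mapped AAAA
  # and a native AAAA. Print the addresses in the order the resolver
  # returns them; this order is what the application will try.
  for family, socktype, proto, canon, sockaddr in socket.getaddrinfo(
          "test2.runningipv6.net", 80, socket.AF_INET6,
          socket.SOCK_STREAM):
      print(sockaddr[0])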
So, according to this, using v4-mapped addresses would imply that
v6-only nodes would not work and would need updating. If this is the
case, then I think we are better off with the other failure mode, i.e.,
dual-stack nodes sometimes preferring translated connectivity, because
that seems a less severe failure and can be avoided by not exposing
synthetic AAAA RRs to dual-stack hosts (which can be done by having no
DNS64 functionality in the DNS server that serves the dual-stack hosts).
(The other possibility is that we got our tests wrong. :-) We did this
in a minute, so it may well be wrong.)
So this needs host updates in the most popular OSes. Alternatively, we
could depend on the fact that A records result in v4-mapped addresses in
the stack anyway and forgo the DNS64 step, but in that case changes are
also necessary, because the resolver only provides v4-mapped addresses
to applications when there is IPv4 reachability.
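The mechanism we are referring to here is the AI_V4MAPPED flag to
getaddrinfo(): when asked for AF_INET6 results and a name has only A
records, the resolver can synthesize v4-mapped addresses locally. A
minimal sketch (the hostname is a hypothetical v4-only name, and
platform support for the flag varies, which is exactly the behavior in
question):

  import socket

  # AI_V4MAPPED asks the resolver to return A-record results as
  # v4-mapped IPv6 addresses when no native AAAA records exist.
  # Per the behavior described above, resolvers appear to hand these
  # to applications only when the host has IPv4 reachability.
  infos = socket.getaddrinfo("v4only.example.net", 80,
                             socket.AF_INET6, socket.SOCK_STREAM,
                             flags=socket.AI_V4MAPPED)
  for family, socktype, proto, canon, sockaddr in infos:
      print(sockaddr[0])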
We have left the RRs on the server, in case anyone else wants to try
them and share the results.
We have additional RRs so you can have more fun:
All under runningipv6.net:
test1  IN AAAA ::ffff:83.149.65.1
test2  IN AAAA ::ffff:83.149.65.1
       IN AAAA 2001:1af8:2:5::2
test3  IN AAAA ::ffff:83.149.65.1
       IN A    83.149.65.1
test4  IN AAAA ::ffff:83.149.65.1
       IN AAAA 2001:1af8:2:5::2
       IN A    83.149.65.1
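If you want to repeat the lookups programmatically rather than through a
browser, here is a rough sketch (assuming a stock getaddrinfo()-based
resolver; you still need a packet capture to see which IP version is
actually used on the wire):

  import socket

  # Resolve each test name for both address families and print what
  # the local resolver hands back to applications. Comparing the
  # output on dual-stack and v6-only hosts shows which addresses are
  # offered, and in which order.
  for name in ("test1", "test2", "test3", "test4"):
      fqdn = name + ".runningipv6.net"
      try:
          infos = socket.getaddrinfo(fqdn, 80, socket.AF_UNSPEC,
                                     socket.SOCK_STREAM)
      except socket.gaierror as err:
          print(fqdn, "-> lookup failed:", err)
          continue
      for family, socktype, proto, canon, sockaddr in infos:
          version = "v6" if family == socket.AF_INET6 else "v4"
          print(fqdn, version, sockaddr[0])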
Regards, Iljitsch and marcelo