Hi Mauro,

Hey, you're bringing out important stuff here that needs to be understood.
Thanks... it is just so busy on the lists before an IETF meeting...

>> >you are absolutely right. my concern was about api issues. a modification
>> >in the behaviour of af_inet6 passive socket, so that they are not allowed
>> >to accept connections from af_inet sockets, would have imho nightmarish
   ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

>sorry, my english is poor - let me explain better. suppose we have an ipv6
>passive socket waiting for an incoming connection. it must be able to
>recognise requests from ipv4 nodes and present them to the "accept" 
>syscall as af_inet6 socket with an ipv4-mapped address.
>imho this behaviour, which is specified in rfc2553, should not be modified.

Great.  Your English is fine.  Mine has been bad here because I am so
busy I have been typing in terse mode, which I have stopped now, on this
thread anyway.  Sorry.

>> An af_inet6 socket should not accept a connection for an af_inet socket.  
>
>you are right. they should open another af_inet socket and fall back to
>an ipv4 connection. however, rfc2553 does not even suggest this behaviour.
>quoting from draft-ietf-ipngwg-rfc2553bis-00.txt, section 3.7:

OK, I explained this in mail last night: my "for" statement implied too
much for any reader.  I assume you have that mail?

>- Applications may use PF_INET6 sockets to open TCP connections to IPv4
>- nodes, or send UDP packets to IPv4 nodes, by simply encoding the
>- destination's IPv4 address as an IPv4-mapped IPv6 address, and passing
>- that address, within a sockaddr_in6 structure, in the connect() or
>- sendto() call.  

>- When applications use PF_INET6 sockets to accept TCP
>- connections from IPv4 nodes, or receive UDP packets from IPv4 nodes, the
>- system returns the peer's address to the application in the accept(),
>- recvfrom(), or getpeername() call using a sockaddr_in6 structure encoded
>- this way. 
>-
>- Few applications will likely need to know which type of node they are
>- interoperating with.  However, for those applications that do need to
>- know, the IN6_IS_ADDR_V4MAPPED() macro, defined in Section 6.7, is
>- provided.
>

>let me understand, when an af_inet6 socket opens a connection with 
>another af_inet6 socket with ipv4-mapped address, the communication
>established is in ipv4, isn't it? so ipv4-mapped addresses are not only
>used for node representation (as they are returned from getaddrinfo and
>getipnodebyname), but also to establish a connection to an ipv4 host.

This is permissible but not required.  The API can also ask for just
plain old IPv4 addresses using af_inet.  How that's implemented in the
stack is none of the standards group's business.

>so the only protocol that requires ipv4-mapped addresses "on the wire"
>is SIIT. if SIIT is not used, then the kernel can reject all connection
>from outside with an ipv4-mapped address, for security issues - like
>itojun has explained us very well.

This should be permissible, and I think we need to add a socket-level
option for af_inet6 listeners that says: do not accept v4-mapped
connections.

This would be reflected in the next iteration (hopefully last call) 
in rfc2553bis. 

>by the way, can you point me to a rfc which explains the difference
>between a hybrid stack and a dual stack? i have not read them all - maybe
>i have missed an important one.

Hmmm.  There really is no RFC for this, but you might want to do a
search on "hybrid stacks" on the web.  You can look at rfc1006 and
rfc1001/1002, which are very, very old but capture the key point of a
hybrid stack.

Roughly let me try to explain without going off the deep end of
networking computer science derivations of how to build network stacks.

A dual stack typically means that each stack is fully and separately
implemented for each networking protocol.  For example with IP (this can
apply to any network protocol), using BSD as a model, there would be
ip_input4.c and ip_input6.c, and tcp_userreq4.c and tcp_userreq6.c.  I
don't think anyone has done anything like that, I hope, as it is pure
kernel bloat for the networking subsystem and too much duplication, at
least for a product-focused IP stack.

Another approach is to build a dual IP layer, a common transport layer,
and a dual API layer, where the IPv6 API layer can handle both v4 and v6
(like DNS, as an example).  This means that after the IP layer all IP
data structures are 16 bytes; hence IPv4 is represented as v4-mapped.

A hybrid stack integrates those parts of the stack where IPv4 and IPv6
can share code and avoid duplicated data structures, while at the same
time avoiding excessive conditional primitives within the stack
implementation.  For IPv4 and IPv6 many of us were able to do this
(probably in different ways) and maintain the performance and
compatibility of IPv4 binaries.  Keeping the performance up is a lot of
work, but the integration of IPv4 and IPv6 is worth it from a "product
perspective".

What we have done in the IETF is use the term "dual stack" in most specs
to mean that an implementation can support both IPv4 and IPv6.  But most
of us have not built pure dual stacks but hybrid stacks.  If you look at
the basic transition mechanisms from Erik Nordmark and Bob Gilligan,
draft-ietf-ngtrans-mech-06.txt, you will see the reference to "dual IP
layer", which I think is far more appropriate than saying dual stack.
Even in one of my co-authored specs, draft-ietf-ngtrans-dstm-02.txt, I
am guilty of abusing "dual stack".  But we live with the misnomer in
IPng.  It would be too much work and discussion to fix it.  But it
always raises its ugly head when we discuss the implementation of the
APIs.

Also, one of the advantages of IPv6 when it was first presented as an
IPng within the IETF was that we would be able to build a clean hybrid
IPv4+IPv6 stack, which in my opinion would not have been possible with
the competing proposals for IPng in the IETF back in 1993-1994.

You can also look at a paper we published disclosing our IPv6 prototype
implementation details, for a good discussion of what I allude to above.
Realize the paper is now old, and whether we update it is TBD, but it is
a good point of reference for you and is open to the public to view.

http://www.digital.com/DTJN01/DTJN01HM.HTM

regards,
/jim

--------------------------------------------------------------------
IETF IPng Working Group Mailing List
IPng Home Page:                      http://playground.sun.com/ipng
FTP archive:                      ftp://playground.sun.com/pub/ipng
Direct all administrative requests to [EMAIL PROTECTED]
--------------------------------------------------------------------
