Itojun,

> 
> >     This is what our implementation does.  Not too surprisingly the
> >behavior below doesn't match what I said our implementation did.  It is
> >quite a bit more restrictive than I had remembered.
> 
> >wild4 then wild6
> >     bind socket for 0.0.0.0/8888
> >     bind socket for ::/8888
> >     failed bind for ::/8888, Address already in use
> 
> >wild6 then wild4
> >     bind socket for ::/8888
> >     bind socket for 0.0.0.0/8888
> >     failed bind for 0.0.0.0/8888, Address already in use
> 
>       thanks, doesn't the "wild4 then wild6" entry cause some problem
>       for you?
>

It could, but so far it hasn't caused problems for simple inetd-type
applications like telnet and ftp.

In particular, the errors above can be avoided simply by setting SO_REUSEADDR
on the second socket before the second call to bind.

That is the thing missing from your test program: the SO_REUSEADDR
and SO_REUSEPORT settings.

I suspect that we will need to change our implementation to be more forgiving
regarding the above tests and probably others as well.

>       actually, if we have bind(2) ordering constraint in the kernel,
>       the order of return value from getaddrinfo(3) has to be carefully
>       designed - in your case, I bet you need to return AF_INET6 first on
>       AI_PASSIVE getaddrinfo lookup.
> 

I am not sure what you mean here exactly.  Since the above failures occur
regardless of order, I don't see why the order in which getaddrinfo returns
the addresses would make any difference.

In this case I was running our stack on Linux and using the getaddrinfo
which comes with Red Hat 6.1 (glibc 2.0.x).
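For reference, the server pattern itojun is describing looks roughly like the
sketch below (my reconstruction, not code from either of our test programs):
an AI_PASSIVE getaddrinfo lookup followed by a bind for each returned address,
in the order returned.  If the kernel only tolerates one bind ordering, this
loop only works when getaddrinfo returns the families in that same order.

```c
#include <stdio.h>
#include <string.h>
#include <unistd.h>
#include <netdb.h>
#include <sys/types.h>
#include <sys/socket.h>

int main(void)
{
    struct addrinfo hints, *res, *ai;
    memset(&hints, 0, sizeof(hints));
    hints.ai_family = AF_UNSPEC;       /* both AF_INET and AF_INET6 */
    hints.ai_socktype = SOCK_STREAM;
    hints.ai_flags = AI_PASSIVE;       /* wildcard addresses */

    int err = getaddrinfo(NULL, "8888", &hints, &res);
    if (err) {
        fprintf(stderr, "getaddrinfo: %s\n", gai_strerror(err));
        return 1;
    }

    /* Bind a socket for each returned address, in the order given. */
    for (ai = res; ai != NULL; ai = ai->ai_next) {
        int s = socket(ai->ai_family, ai->ai_socktype, ai->ai_protocol);
        if (s < 0)
            continue;
        const char *fam = (ai->ai_family == AF_INET6) ? "AF_INET6" : "AF_INET";
        if (bind(s, ai->ai_addr, ai->ai_addrlen) == 0)
            printf("bound %s\n", fam);
        else
            printf("bind %s failed\n", fam);
        close(s);   /* demo only: a real server would keep it and listen() */
    }
    freeaddrinfo(res);
    return 0;
}
```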



tim
--------------------------------------------------------------------
IETF IPng Working Group Mailing List
IPng Home Page:                      http://playground.sun.com/ipng
FTP archive:                      ftp://playground.sun.com/pub/ipng
Direct all administrative requests to [EMAIL PROTECTED]
--------------------------------------------------------------------
