Hi,
a few days ago I posted a question asking why the RI does all its network
calls with AF_INET6 even when the user explicitly wants to do IPv4-only
stuff.

I think I have found the reason and would like to propose that GNU Classpath
go the same route. Let me try to explain the situation:

When a new socket is created, the native code currently calls socket(AF_INET,
...) at some point. Usually the user then configures the socket and binds it to
a specific inet address. Not until this binding operation starts do we know
whether the user wants an IPv4 socket or an IPv6 one. However, the current
implementation has already made that decision, and it is AF_INET (== IPv4).

Consequently stuff like connecting to IPv6 servers fails.
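
Here is a minimal standalone sketch (not Classpath code) of the problem; the
peer address 2001:db8::1 and port 80 are only example values:

#include <stdio.h>
#include <string.h>
#include <errno.h>
#include <unistd.h>
#include <sys/socket.h>
#include <netinet/in.h>
#include <arpa/inet.h>

int main(void)
{
    /* This is what the current native code effectively does: the address
     * family is fixed long before the user supplies an address. */
    int fd = socket(AF_INET, SOCK_STREAM, 0);

    struct sockaddr_in6 peer;
    memset(&peer, 0, sizeof(peer));
    peer.sin6_family = AF_INET6;
    peer.sin6_port = htons(80);
    inet_pton(AF_INET6, "2001:db8::1", &peer.sin6_addr);

    /* connect() rejects the IPv6 sockaddr because the socket is IPv4-only. */
    if (connect(fd, (struct sockaddr *) &peer, sizeof(peer)) < 0)
        printf("connect failed as expected: %s\n", strerror(errno));

    close(fd);
    return 0;
}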

For multicast sockets the situation is even worse: if you do a 'new
MulticastSocket()' the socket is created as AF_INET and is automatically bound
to the IPv4 ANY_ADDR. Doing a joinGroup()/leaveGroup() with an IPv6 address is
then quite pointless.
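
For illustration, a rough sketch of the native call that a joinGroup() with an
IPv6 group address would need; the group ff02::1:2, the interface name "lo" and
the plain UDP sockets are only example values, not Classpath code:

#include <stdio.h>
#include <string.h>
#include <errno.h>
#include <unistd.h>
#include <sys/socket.h>
#include <netinet/in.h>
#include <arpa/inet.h>
#include <net/if.h>

static int try_join(int family)
{
    int fd = socket(family, SOCK_DGRAM, 0);

    struct ipv6_mreq mreq;
    memset(&mreq, 0, sizeof(mreq));
    inet_pton(AF_INET6, "ff02::1:2", &mreq.ipv6mr_multiaddr);
    mreq.ipv6mr_interface = if_nametoindex("lo"); /* example interface name */

    /* IPV6_JOIN_GROUP is an IPv6-level option: it can only succeed on a
     * socket that was created with AF_INET6 in the first place. */
    int rc = setsockopt(fd, IPPROTO_IPV6, IPV6_JOIN_GROUP,
                        &mreq, sizeof(mreq));
    printf("family %s: join %s (%s)\n",
           family == AF_INET ? "AF_INET" : "AF_INET6",
           rc == 0 ? "ok" : "failed",
           rc == 0 ? "-" : strerror(errno));
    close(fd);
    return rc;
}

int main(void)
{
    try_join(AF_INET);   /* what the current MulticastSocket gives us */
    try_join(AF_INET6);  /* what the proposal would give us */
    return 0;
}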

What I am proposing is that we do all our native interaction as if the addresses
were IPv6. As IPv4 is effectively a subset of IPv6 there should be no problem:
prefix the four IPv4 bytes with the 12-byte IPv4-mapped prefix (ten zero bytes
followed by 0xff 0xff) and you're done.
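
A small sketch of how that could look on the native side, assuming the standard
IPv4-mapped form ::ffff:a.b.c.d; the peer 127.0.0.1:8080 is only an example:

#include <stdio.h>
#include <string.h>
#include <errno.h>
#include <unistd.h>
#include <sys/socket.h>
#include <netinet/in.h>

/* Embed a 4-byte IPv4 address into a 16-byte IPv6 address. */
static void map_ipv4(const unsigned char ipv4[4], struct in6_addr *out)
{
    memset(out, 0, sizeof(*out));
    out->s6_addr[10] = 0xff;
    out->s6_addr[11] = 0xff;
    memcpy(&out->s6_addr[12], ipv4, 4);
}

int main(void)
{
    /* Always create the socket as AF_INET6; the family no longer has to be
     * decided before the user hands us an address. */
    int fd = socket(AF_INET6, SOCK_STREAM, 0);

    /* Reaching IPv4 peers through this socket only works while IPV6_V6ONLY
     * is off (the default on Linux); Classpath would have to make sure of
     * that on every platform. */
    int off = 0;
    setsockopt(fd, IPPROTO_IPV6, IPV6_V6ONLY, &off, sizeof(off));

    unsigned char ipv4[4] = { 127, 0, 0, 1 };
    struct sockaddr_in6 peer;
    memset(&peer, 0, sizeof(peer));
    peer.sin6_family = AF_INET6;
    peer.sin6_port = htons(8080);
    map_ipv4(ipv4, &peer.sin6_addr);

    if (connect(fd, (struct sockaddr *) &peer, sizeof(peer)) < 0)
        printf("connect: %s\n", strerror(errno));
    else
        printf("connected to an IPv4 peer through an AF_INET6 socket\n");

    close(fd);
    return 0;
}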

But what about the platforms that do not natively support IPv6? Bad luck, they
have to use a little hack: skip the first 12 bytes of a buffer containing an
address and call the system function with AF_INET instead. Additionally they
have to adjust VMPlainSocketImpl's methods to throw some error when someone
passes an Inet6Address to them.
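
Purely as a sketch of that hack (unmap_ipv4 is a made-up name and EAFNOSUPPORT
just one possible error code):

#include <stdio.h>
#include <string.h>
#include <errno.h>
#include <netinet/in.h>

/* Recover the IPv4 address from a 16-byte buffer, or fail if the buffer holds
 * a real IPv6 address that an IPv4-only host cannot handle. */
static int unmap_ipv4(const unsigned char addr16[16], struct in_addr *out)
{
    static const unsigned char mapped_prefix[12] =
        { 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0xff, 0xff };

    /* Anything without the IPv4-mapped prefix is a genuine IPv6 address and
     * must be reported as an error, which the Java side would turn into an
     * exception for the Inet6Address case. */
    if (memcmp(addr16, mapped_prefix, sizeof(mapped_prefix)) != 0)
        return -EAFNOSUPPORT;

    /* Skip the 12-byte prefix and keep the trailing IPv4 bytes. */
    memcpy(&out->s_addr, addr16 + 12, 4);
    return 0;
}

int main(void)
{
    unsigned char mapped[16] =
        { 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0xff, 0xff, 127, 0, 0, 1 };
    unsigned char real_v6[16] =
        { 0x20, 0x01, 0x0d, 0xb8, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1 };

    struct in_addr v4;
    printf("mapped address: %d\n", unmap_ipv4(mapped, &v4));   /* 0 */
    printf("real IPv6:      %d\n", unmap_ipv4(real_v6, &v4));  /* error */
    return 0;
}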

Questions, opinions, suggestions?

cya
Robert
