As far as I understand the APIs, the "new" NIO DatagramChannel and 
SocketChannel can be configured in blocking mode as well, so why does Netty 
use the old DatagramSocket/ServerSocket classes for the OIO backend? 
Wouldn't it be more performant to simply use DatagramChannel/SocketChannel 
in blocking mode? The NIO API works with ByteBuffers (in particular direct 
ByteBuffers) instead of byte[], which should give the blocking backend 
better performance. In my opinion the blocking backend still matters, 
because it can perform much better in scenarios with very few connections 
(e.g. a UDP server, a TCP forwarder, ...).
Maybe there are details I'm not aware of yet that led to the decision to 
keep using the old socket classes.

Greetings,
Simon
