Hello.  We are using Netty 4.1.5.Final (through Apache Camel) to connect to a 
server over TCP and send about 2 Mb/s.  The server only accepts one connection 
at a time, yet "netstat -anp" shows multiple connections to it in the 
ESTABLISHED state.  Only one of those connections achieves any throughput; the 
others have send queues that keep growing, averaging between 600,000 and 
700,000 bytes of accumulated data.

Since we receive the data on a UDP socket, we need to forward all of our 
packets in order (or as close to it as possible!) to the destination server, 
so letting data back up in queues is a definite show stopper for us.  As 
helpful as the Apache Camel people are, they recommended that for this aspect 
we contact people more familiar with the intricacies of Netty.
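For what it's worth, the same buildup we see in netstat should also be visible from inside Netty as the channel's outbound buffer filling up.  Here is a rough sketch of what I mean (the handler name and water-mark values are just illustrative, not our actual code):

```java
import io.netty.channel.ChannelDuplexHandler;
import io.netty.channel.ChannelHandlerContext;
import io.netty.channel.ChannelOption;
import io.netty.channel.WriteBufferWaterMark;

// Illustrative handler: logs when the channel's outbound buffer crosses the
// configured water marks, which is how a per-connection queue buildup shows
// up inside Netty (isWritable() flips to false once the high mark is hit).
public class WritabilityLogger extends ChannelDuplexHandler {
    @Override
    public void channelWritabilityChanged(ChannelHandlerContext ctx) {
        System.out.println(ctx.channel()
                + " writable=" + ctx.channel().isWritable()
                + " bytesBeforeUnwritable=" + ctx.channel().bytesBeforeUnwritable());
        ctx.fireChannelWritabilityChanged();
    }
}

// When bootstrapping the client (thresholds here are made up):
// bootstrap.option(ChannelOption.WRITE_BUFFER_WATER_MARK,
//         new WriteBufferWaterMark(64 * 1024, 128 * 1024)); // low/high, in bytes
```

Checking writability before each write would at least let us fail fast or log instead of silently queueing hundreds of kilobytes.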
If anyone familiar with both technologies can offer a solution via Camel, that 
would be great, but I am also interested in hearing how I could diagnose and 
solve this issue in terms of Netty alone, with Camel out of the picture.  Is 
it possible to configure Netty to only ever send data through one socket, even 
if it keeps a few connections available in case the active connection has a 
problem?  So far I have read a little about FixedChannelPool, and I am 
wondering whether using it would solve my problem.
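For concreteness, here is roughly what I imagine that would look like (the host, port, and handler below are placeholders, not our real setup).  My understanding is that with maxConnections set to 1, the pool would never open a second TCP connection, and concurrent acquires would simply queue for the single channel:

```java
import io.netty.bootstrap.Bootstrap;
import io.netty.channel.Channel;
import io.netty.channel.EventLoopGroup;
import io.netty.channel.nio.NioEventLoopGroup;
import io.netty.channel.pool.AbstractChannelPoolHandler;
import io.netty.channel.pool.FixedChannelPool;
import io.netty.channel.socket.nio.NioSocketChannel;
import io.netty.util.concurrent.Future;

public class SingleConnectionSender {
    public static void main(String[] args) {
        EventLoopGroup group = new NioEventLoopGroup();
        Bootstrap bootstrap = new Bootstrap()
                .group(group)
                .channel(NioSocketChannel.class)
                // placeholder destination; substitute the real server
                .remoteAddress("server.example.com", 9000);

        // maxConnections = 1: at most one TCP connection is ever opened;
        // further acquire() calls wait until the channel is released.
        FixedChannelPool pool = new FixedChannelPool(bootstrap,
                new AbstractChannelPoolHandler() {
                    @Override
                    public void channelCreated(Channel ch) {
                        // add encoders/handlers to the pipeline here
                    }
                }, 1);

        Future<Channel> acquire = pool.acquire();
        acquire.addListener((Future<Channel> f) -> {
            if (f.isSuccess()) {
                Channel ch = f.getNow();
                // write the forwarded packet, then hand the channel back
                ch.writeAndFlush(io.netty.buffer.Unpooled.copiedBuffer(
                        "payload", io.netty.util.CharsetUtil.UTF_8));
                pool.release(ch);
            }
        });
    }
}
```

Since all writes would then funnel through one channel, I would hope packet ordering is preserved as well, but I would appreciate confirmation on whether this is the intended use of FixedChannelPool.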

Thanks in advance, and we appreciate any insight that anyone can provide.

Thanks,
Steve
