I changed the server and handler as follows, which resulted in the TCP window
size increasing from 5840 to 63924 (by 1072 each time) and then staying at
63924. However, a TCP dump (viewed in Wireshark) shows bad-checksum errors on
all the messages sent out from our server (which is running on VMware).
The client did receive all the messages correctly, though.
Should I set the ReceiveBufferSize explicitly? If so, what would be an
ideal size? The maximum size of an XML message is less than 2048
bytes. TcpNoDelay is set to false.
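For comparison, here is a minimal sketch of how the same socket option behaves
on a plain java.net.Socket, outside MINA. The 8192-byte request is just an
illustrative value, not a recommendation; the OS treats setReceiveBufferSize
as a hint and may round the actual buffer up or down:

```java
import java.net.Socket;

public class ReceiveBufferCheck {
    public static void main(String[] args) throws Exception {
        try (Socket socket = new Socket()) {
            // Inspect the OS default SO_RCVBUF on an unbound socket.
            System.out.println("default SO_RCVBUF = "
                    + socket.getReceiveBufferSize());

            // Request a buffer with room for several 2048-byte XML messages;
            // the kernel may adjust the value it actually grants.
            socket.setReceiveBufferSize(8192);
            System.out.println("after request of 8192, SO_RCVBUF = "
                    + socket.getReceiveBufferSize());
        }
    }
}
```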
Server
ByteBuffer.setUseDirectBuffers(false);
ByteBuffer.setAllocator(new SimpleByteBufferAllocator());

IoAcceptor acceptor = new SocketAcceptor();
SocketSessionConfig sessionConfig =
        ((SocketAcceptorConfig) acceptor.getDefaultConfig()).getSessionConfig();

// Limits the session receive buffer size to 65535 bytes
int receiveBufferSize = sessionConfig.getReceiveBufferSize();
logger.info("(SocketAcceptor().getDefaultConfig()).getReceiveBufferSize()={}",
        receiveBufferSize);
if (receiveBufferSize > 65535)
{
    logger.debug("Setting the session ReceiveBufferSize to {}", 65535);
    sessionConfig.setReceiveBufferSize(65535);
}

boolean tcpNoDelay = sessionConfig.isTcpNoDelay();
logger.info("tcpNoDelay={}", tcpNoDelay);

XmlExchangeServerIoHandler handler = new XmlExchangeServerIoHandler();
handler.setIoAcceptor(acceptor);
acceptor.bind(new InetSocketAddress(23000), handler);
Handler
public void sessionCreated(IoSession session) throws Exception
{
    session.setIdleTime(IdleStatus.BOTH_IDLE, 10);
    session.getFilterChain().addLast("protocolFilter",
            new ProtocolCodecFilter(new XmlCodecFactory(false)));
    session.setAttribute("XML_MESSAGE_OBJECT", new XmlMessage());
}
--
View this message in context:
http://www.nabble.com/TCP-window-size-decrease-results-in-one-byte-packet-transmission-tp23658893p23672172.html
Sent from the Apache MINA User Forum mailing list archive at Nabble.com.