On Mon, Mar 31, 2008 at 3:38 AM, Ronald Klop <[EMAIL PROTECTED]> wrote:

> See my previous mail about send/receive buffers filling because the Ack
> wasn't read by FastAsyncSender. The option waitForAck="true" did the trick
> for me. But for FastAsyncSender you should set sendAck="false" on the
> receiving side.
Thanks for the information, Ronald. Can you clarify your settings by posting a minimal configuration? I looked for the sendAck option on the Tomcat cluster page and couldn't find any reference to that configuration parameter:

http://tomcat.apache.org/tomcat-5.5-doc/cluster-howto.html

It looks like doing one of the following two things is a good idea for a barebones setup, to make sure the acking behavior is consistent on both sides, since Tomcat doesn't seem to ensure that the settings are sane:

<Cluster className="org.apache.catalina.cluster.tcp.SimpleTcpCluster"
         receiver.sendAck="true"
         sender.waitForAck="true"/>

<Cluster className="org.apache.catalina.cluster.tcp.SimpleTcpCluster"
         receiver.sendAck="false"
         sender.waitForAck="false"/>

I'm a bit confused as to why this issue only affects one of my clusters (out of 3 production clusters with identical setups) and why more people aren't seeing it. Are most people specifying their Ack settings explicitly? Or do most people not see enough traffic between restarts to trigger this issue? Granted, the one that's affected also happens to handle the most traffic by far. I'll have to do more testing on my test cluster to verify (I've already turned on waitForAck everywhere in production); hopefully I can reproduce it.

Does anyone have information on how using Acks in the cluster affects performance?

-Dave
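P.S. For reference, here is a rough sketch of how I'd expect the all-async pairing to look in server.xml using the nested Receiver/Sender elements from the 5.5 cluster howto. The manager class, listen address, and port are just the howto's defaults, and sendAck on the Receiver is an assumption on my part, since it doesn't appear in the documented attribute list:

<Cluster className="org.apache.catalina.cluster.tcp.SimpleTcpCluster"
         managerClassName="org.apache.catalina.cluster.session.DeltaManager">
  <!-- Receiving side: don't send Acks, since FastAsyncSender won't read them.
       Note: sendAck here is assumed, not documented in the 5.5 howto. -->
  <Receiver className="org.apache.catalina.cluster.tcp.ReplicationListener"
            tcpListenAddress="auto"
            tcpListenPort="4001"
            sendAck="false"/>
  <!-- Sending side: fast async queue replication, don't block waiting on Acks -->
  <Sender className="org.apache.catalina.cluster.tcp.ReplicationTransmitter"
          replicationMode="fastasyncqueue"
          waitForAck="false"/>
</Cluster>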