JStack.txt <http://activemq.2283324.n4.nabble.com/file/n4675874/JStack.txt>

Hi,

I am using ActiveMQ v5.8. Yesterday I saw a strange issue with the broker: the ActiveMQ broker running on our cluster machine was hung and not responding to client requests, and the client processes were blocked as a result. There were also a lot of connections (more than 100) from a single client.
I took a jstack of the hung process and have attached it to this message (link above). Many of the blocked threads appear to be doing inactivity-monitoring related work, and I am seeing a lot of warnings like the following in my broker log:

"20131214 01:14:37:740 IST (ActiveMQ InactivityMonitor Worker) org.apache.activemq.broker.TransportConnection#serviceTransportException 238 WARN - Transport Connection to: tcp://...:1373 failed: org.apache.activemq.transport.InactivityIOException: Channel was inactive for too (>30000) long: tcp://...:21373"

My observations: I suspect the broker is not handling momentary CPU spikes very well. It gets into a state where messages pile up, and then expensive work such as closing and reopening all connections is triggered because client heartbeats time out. The current heartbeat/inactivity timeout is 30 seconds, which may be too low when the broker is handling more than 250 clients.

Please help me understand and resolve this, because it is affecting a critical production environment. My current plan is to raise the inactivity timeout to 120 seconds, which should give the read check enough time to complete. Are there any side effects of increasing the inactivity-monitoring timeout like this?

Thanks,
Anuj
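P.S. For reference, below is roughly how I plan to raise the timeout on the client side. This is only a sketch: the broker host/port is a placeholder, and I am assuming the standard OpenWire wireFormat.maxInactivityDuration URI option is the right knob here; please correct me if it is not.

import javax.jms.Connection;
import javax.jms.Session;
import org.apache.activemq.ActiveMQConnectionFactory;

public class LongerInactivityTimeoutExample {
    public static void main(String[] args) throws Exception {
        // "mybroker:61616" is a placeholder for our real broker address.
        // wireFormat.maxInactivityDuration is the OpenWire inactivity timeout in ms;
        // the default 30000 ms matches the ">30000" in the warning above,
        // and 120000 ms is the 120-second value I want to try.
        String brokerUrl = "tcp://mybroker:61616?wireFormat.maxInactivityDuration=120000";

        ActiveMQConnectionFactory factory = new ActiveMQConnectionFactory(brokerUrl);
        Connection connection = factory.createConnection();
        connection.start();

        // ... create sessions, producers and consumers as usual ...
        Session session = connection.createSession(false, Session.AUTO_ACKNOWLEDGE);
        session.close();

        connection.close();
    }
}

I also assume the same wireFormat.maxInactivityDuration=120000 option has to be appended to the transportConnector URI in the broker's activemq.xml, since as far as I understand the value is negotiated between client and broker.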