Isn't 36k * 14,000 msgs approx. 512MB? If that's the case, I'm not sure you can recover without losing messages. Maybe you could serialize the overflow to disk?
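If you can tolerate the disk I/O, one way to get that overflow onto disk is to send the messages as persistent, so the broker writes them to its message store instead of keeping the whole backlog in heap. Rough sketch only - the broker URL and queue name below are placeholders, not from your setup:

import javax.jms.Connection;
import javax.jms.DeliveryMode;
import javax.jms.MessageProducer;
import javax.jms.Session;
import javax.jms.TextMessage;
import org.apache.activemq.ActiveMQConnectionFactory;

public class PersistentProducer {
    public static void main(String[] args) throws Exception {
        // Placeholder URL and queue name
        ActiveMQConnectionFactory factory =
            new ActiveMQConnectionFactory("tcp://localhost:61616");
        Connection connection = factory.createConnection();
        connection.start();

        Session session = connection.createSession(false, Session.AUTO_ACKNOWLEDGE);
        MessageProducer producer =
            session.createProducer(session.createQueue("BIG.MESSAGES"));

        // PERSISTENT delivery tells the broker to write each message to its
        // store, so a large backlog does not have to live entirely in memory.
        producer.setDeliveryMode(DeliveryMode.PERSISTENT);

        TextMessage message = session.createTextMessage(new String(new char[36173]));
        producer.send(message);

        connection.close();
    }
}

Persistent sends are slower than your current async/non-persistent setup, so you would be trading throughput for the ability to survive the peak-hour buildup.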
P.S. - Is this for Vanguard?

Chase Seibert
Application Developer
B U L L H O R N
Staffing and Recruiting Software, On Target, On Demand
33-41 Farnsworth Street, 5th Floor, Boston, MA 02210
(w) 617.478.9119
www.bullhorn.com

-----Original Message-----
From: Eichberger, German [EMAIL PROTECTED]
To: "activemq-users@geronimo.apache.org" <activemq-users@geronimo.apache.org>
Sent: Jan 2, 2007 07:06:44 PM
Subject: Out of memory with 14K large messages

Hi,

I am new to ActiveMQ and have configured a network of brokers (two servers) with two slow consumers and a durable queue. We are adding approximately 13 messages per second and taking about one out per second. To deal with that asymmetry I have also applied some of the performance tips on the producer:

connection.setCopyMessageOnSend(false);
connection.setOptimizeAcknowledge(true);
connection.setUseAsyncSend(true);

We are sending 36,173-byte messages to the system, and after about 14,000 of them we get a java.lang.OutOfMemoryError - ActiveMQ is running in a Tomcat container which has only 512MB of heap space. I was wondering if there is any way (besides increasing the heap space) to make ActiveMQ recover when it runs out of memory, or whether there is a setting that lets it run with a small memory footprint.

The stack trace looks like:

at org.apache.activemq.util.JMSExceptionSupport.create(JMSExceptionSupport.java:46)
at org.apache.activemq.ActiveMQConnection.syncSendPacket(ActiveMQConnection.java:1191)
at org.apache.activemq.AdvisoryConsumer.<init>(AdvisoryConsumer.java:46)
at org.apache.activemq.ActiveMQConnection.ensureConnectionInfoSent(ActiveMQConnection.java:1281)
at org.apache.activemq.ActiveMQConnection.start(ActiveMQConnection.java:449)

and the configuration file:

I know we are exploring some kind of edge case, but we would like to buffer a huge number of messages during peak hours and catch up with our consumers during non-peak...

Thanks,
German

---
German Eichberger - [EMAIL PROTECTED]
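One other knob that might help with the slow consumers - and this is a guess, since I don't know your consumer setup: if the clients are prefetching a large number of these 36k messages, shrinking the queue prefetch on the ActiveMQ connection factory keeps each consumer from pulling a big chunk of the backlog into its own heap. Rough sketch, with a made-up broker URL:

import org.apache.activemq.ActiveMQConnectionFactory;
import org.apache.activemq.ActiveMQPrefetchPolicy;

// Placeholder URL
ActiveMQConnectionFactory factory =
    new ActiveMQConnectionFactory("tcp://localhost:61616");

// Buffer only a handful of messages per consumer instead of the default,
// so a slow consumer does not hold a large prefetched backlog in memory.
ActiveMQPrefetchPolicy prefetch = factory.getPrefetchPolicy();
prefetch.setQueuePrefetch(10);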