Carlos,

You appear to be running ActiveMQ 5.9.0 on Java 6, both of which are quite
old. Try upgrading to the latest ActiveMQ (5.11.1) with Java 7 or 8 and, as
Tim pointed out, enable the G1GC garbage collector. Once you've done that,
remove the -Xms and -Xmn flags. See if that helps.
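
For example, assuming you move to a HotSpot JVM (G1GC is a HotSpot
collector), the client's launch options might look something like this;
the heap size and jar name are only placeholders, tune them for your box:

    java -Xmx2g -XX:+UseG1GC -jar your-consumer.jar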

On the off chance the broker is being asked to handle messages that are
larger than you're expecting, add the option
wireFormat.maxFrameSize=<some_byte_value> to the TCP/NIO transport
connector definition in activemq.xml. This will cause sends of messages
larger than the configured threshold to fail.
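
For example, something along these lines in activemq.xml; the connector
name, port and limit here are just placeholders (104857600 bytes = 100 MB):

    <transportConnector name="openwire"
        uri="tcp://0.0.0.0:61616?wireFormat.maxFrameSize=104857600"/>

Sends of messages bigger than that limit should then fail with an error
instead of being accepted by the broker.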


Thanks,
Paul

On Fri, Jun 26, 2015 at 10:01 AM, Tim Bain <tb...@alumni.duke.edu> wrote:

> The stack trace you quoted is irrelevant; it's just executors waiting to be
> given work to do.  There are also lots of threads trying to read messages
> from sockets in
> org/apache/activemq/transport/tcp/TcpBufferedInputStream.fill() or waiting
> for a message to be available during a call to
> org/apache/activemq/SimplePriorityMessageDispatchChannel.dequeue(); both of
> those are also irrelevant, because they're just ActiveMQ waiting to be
> given work.
>
> There are two threads waiting for responses to synchronous sends in
> org/apache/activemq/ActiveMQConnection.syncSendPacket().  Those might
> simply be victims of the inability to read messages, or they might be
> relevant to what's going on; it's hard to tell from what you've sent.  One
> thing I'd check based on them (and one thing I'd always check in general,
> so hopefully you've already done this) is whether there are any errors in
> the ActiveMQ broker logs, and specifically whether there are any messages
> about producer flow control kicking in.  Depending on how PFC is
> configured, I believe I've seen at least one JIRA or wiki page describing
> the potential for PFC to cause deadlock when synchronous sends are used by
> preventing the acks from being read.  If you see PFC-related lines in the
> broker logs, we'll go from there; if not, then don't worry about this.
>
> My overall thought, however, is that ActiveMQ (and the Spring JMS library
> you're using) on its own isn't likely to run your client out of memory
> unless your messages are VERY large, because there are limits on how many
> messages will be transferred to your client at any one time.  Plus this
> code has been run by LOTS of people over the years; if it caused OOMs on
> its own, the cause would almost certainly have already been found.  So it's
> most likely that this behavior is caused by something your own code is
> doing, and the most likely guess is that you may be wrongly holding a
> reference to objects that could otherwise be GCed, increasing heap memory
> over time till you eventually run out.  You'll probably want to use tools
> such as JVisualVM to analyze your memory usage and figure out what objects
> are the ones causing it to grow and what's holding a reference to them.
>
> One other possibility is that your algorithm is correct, but processing
> each message is memory-intensive (using over half the heap in total across
> however many messages you're processing in parallel) and so lots of objects
> are getting forced into Old Gen even though they're actually short-lived
> objects, and they are only getting removed from Old Gen via full GCs.  I
> think this is far less likely than the other things I've described, but if
> it's the problem, you could 1) increase the JVM's heap size if possible, 2)
> tweak the percentages allocated to Old Gen and Young Gen to give more to
> Young Gen in the hopes that more things will stay in Young Gen for longer,
> or 3) look into other GC strategies (I'd recommend G1GC, but you appear to
> be on the IBM JVM and I've never used it or researched it so I don't know
> what GC strategies it offers).  But I think you'd really want to prove to
> yourself that this is your problem (i.e. that none of the other things I've
> mentioned are) before you go down this path, because throwing more memory
> at a memory leak doesn't fix it, it just delays it and makes it harder to
> troubleshoot.
>
> Tim
>
> On Fri, Jun 26, 2015 at 1:53 AM, cdelgado <carlos.delg...@proyecti.es>
> wrote:
>
> > Hi all,
> >
> > We're facing an issue that is stopping us from going to production; this
> > is a huge blocker for us.
> >
> > The problem is that one of our consumers is hanging (randomly, apparently)
> > and stops consuming messages. From JMX we can see that it is consuming
> > memory and performing quite a lot of full GCs.
> >
> > I'm attaching a javacore dump generated by sending a kill -3 to the
> > process.
> > There you can see all the details and thread statuses.
> >
> > javacore.txt
> > <http://activemq.2283324.n4.nabble.com/file/n4698204/javacore.txt>
> >
> > Basically, we have 90.7% of the threads waiting on condition, 3.5% Parked
> > and 5.7% Running.
> >
> > The Parked threads have different stacktraces, but generally they end in
> > the same block:
> >
> > at sun/misc/Unsafe.park(Native Method)
> > at java/util/concurrent/locks/LockSupport.parkNanos(LockSupport.java:222(Compiled Code))
> > at java/util/concurrent/SynchronousQueue$TransferStack.awaitFulfill(SynchronousQueue.java:435(Compiled Code))
> > at java/util/concurrent/SynchronousQueue$TransferStack.transfer(SynchronousQueue.java:334(Compiled Code))
> > at java/util/concurrent/SynchronousQueue.poll(SynchronousQueue.java:885(Compiled Code))
> > at java/util/concurrent/ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:966(Compiled Code))
> > at java/util/concurrent/ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:928)
> > at java/lang/Thread.run(Thread.java:761)
> >
> > Any *quick* help would be much appreciated, I'm a bit lost here.. :S
> >
> > Carlos
> >
> >
> >
> > --
> > View this message in context:
> > http://activemq.2283324.n4.nabble.com/JMS-Client-HANGING-AMQ-5-9-AIX-6-1-tp4698204.html
> > Sent from the ActiveMQ - User mailing list archive at Nabble.com.
> >
>
