[ https://issues.apache.org/jira/browse/AMQCPP-510?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13763381#comment-13763381 ]

Timothy Bish commented on AMQCPP-510:
-------------------------------------

If you'd like to test out the following patch, it should resolve the leaks on 3.8.0.

{noformat}
diff --git a/activemq-cpp/src/main/activemq/core/ConnectionAudit.cpp b/activemq-cpp/src/main/activemq/core/ConnectionAudit.cpp
index d3be4ee..b8e3c5e 100644
--- a/activemq-cpp/src/main/activemq/core/ConnectionAudit.cpp
+++ b/activemq-cpp/src/main/activemq/core/ConnectionAudit.cpp
@@ -18,6 +18,7 @@
 #include "ConnectionAudit.h"
 
 #include <decaf/util/LinkedHashMap.h>
+#include <decaf/util/StlMap.h>
 
 #include <activemq/core/Dispatcher.h>
 #include <activemq/core/ActiveMQMessageAudit.h>
@@ -48,10 +49,11 @@
     public:
 
         Mutex mutex;
-        LinkedHashMap<Pointer<ActiveMQDestination>, Pointer<ActiveMQMessageAudit> > destinations;
+
+        StlMap<Pointer<ActiveMQDestination>, Pointer<ActiveMQMessageAudit>, ActiveMQDestination::COMPARATOR> destinations;
         LinkedHashMap<Dispatcher*, Pointer<ActiveMQMessageAudit> > dispatchers;
 
-        ConnectionAuditImpl() : mutex(), destinations(1000), dispatchers(1000) {
+        ConnectionAuditImpl() : mutex(), destinations(), dispatchers(1000) {
         }
     };
 }}

{noformat}
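
For context: the report below shows the destinations LinkedHashMap treating every dispatched Pointer<ActiveMQDestination> as a brand-new key, so the audit map grows with every message. The patch swaps in an StlMap ordered by ActiveMQDestination::COMPARATOR, which compares destinations by value, so repeated lookups for the same queue reuse one entry. A minimal standalone sketch of that difference (not ActiveMQ-CPP code; the Destination struct and the std::shared_ptr/std::map types below are stand-ins for Pointer and the decaf maps):

{noformat}
#include <iostream>
#include <map>
#include <memory>
#include <string>

struct Destination {
    std::string name;  // e.g. "queue://TEST.FOO"
};

// Value comparator, analogous in spirit to ActiveMQDestination::COMPARATOR:
// orders entries by the destination itself, not by smart-pointer identity.
struct DestinationCompare {
    bool operator()(const std::shared_ptr<Destination>& a,
                    const std::shared_ptr<Destination>& b) const {
        return a->name < b->name;
    }
};

int main() {
    // Keyed by pointer identity: every dispatch carries a fresh pointer to the
    // same queue, so a new entry is added each time -> unbounded growth.
    std::map<std::shared_ptr<Destination>, int> byPointer;

    // Keyed by destination value via the comparator: repeated lookups for
    // "queue://TEST.FOO" all land on the same entry, as in the StlMap fix.
    std::map<std::shared_ptr<Destination>, int, DestinationCompare> byValue;

    for (int i = 0; i < 3; ++i) {
        auto dest = std::make_shared<Destination>(Destination{"queue://TEST.FOO"});
        byPointer[dest]++;
        byValue[dest]++;
    }

    std::cout << "byPointer entries: " << byPointer.size() << "\n";  // 3
    std::cout << "byValue entries:   " << byValue.size() << "\n";    // 1
    return 0;
}
{noformat}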
                
> Consumer leaks memory with failover and checkForDuplicates=true
> ---------------------------------------------------------------
>
>                 Key: AMQCPP-510
>                 URL: https://issues.apache.org/jira/browse/AMQCPP-510
>             Project: ActiveMQ C++ Client
>          Issue Type: Bug
>          Components: CMS Impl
>    Affects Versions: 3.7.0, 3.7.1, 3.8.0
>         Environment: Linux x86/64 (ubuntu)
>            Reporter: Sam Parsons
>            Assignee: Timothy Bish
>             Fix For: 3.8.1, 3.9.0
>
>
> The example application (examples/main.cpp) leaks memory in the consumer. 
> To reproduce the problem, add a usleep(100000) after the producer->send, set 
> useTopics = false and numMessages = 2000000.
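> A rough sketch of those three changes (placement and declarations are approximate, not the verbatim examples/main.cpp source):
> {noformat}
> bool useTopics = false;     // run against a queue instead of a topic
> int numMessages = 2000000;  // large message count so the growth is visible
> // ...
> producer->send(message);
> usleep(100000);             // pause after each send, per the steps above
> {noformat}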
> With the following URL, the example application stays at 4% CPU and 15 MB 
> of memory on my development machine: 
> "failover:(tcp://localhost:61616)?connection.checkForDuplicates=false" 
> Without "checkForDuplicates=false", the CPU and memory usage grow 
> constantly; I eventually stopped it at 100% CPU and 340 MB of memory. 
> Valgrind suggested that the leak was in ConnectionAudit.cpp:100. I added the 
> following debug: 
> {noformat}
> try { 
>   audit = this->impl->destinations.get(destination); 
> } catch (NoSuchElementException& ex) { 
>   audit.reset(new ActiveMQMessageAudit(auditDepth, auditMaximumProducerNumber)); 
>   this->impl->destinations.put(destination, audit); 
>   std::cout << "New destination audit: " << destination->toString() 
>             << ", size: " << this->impl->destinations.keySet().size() << std::endl; 
> } 
> {noformat}
> ...and it prints... 
> {noformat}
> Sent message #410 from thread 140736021874568 
> New destination audit: queue://TEST.FOO, size: 410 
> Message #410 Received: Hello world! from thread 140736021874568 
> Sent message #411 from thread 140736021874568 
> New destination audit: queue://TEST.FOO, size: 411 
> Message #411 Received: Hello world! from thread 140736021874568 
> {noformat}
> So the size of the destinations map keeps increasing. It seems to treat every 
> message as having a new destination, but this is not the case: it's just the 
> example code in main.cpp, which creates the TEST.FOO destination and sends 
> messages to it in a loop. 
> I tested 3.7.0, 3.7.1 and 3.8.0, and the problem is present in all of these versions. 
> 3.4.4 does not have this problem, but that version does not have duplicate 
> detection, so that's probably why.

