[ 
https://issues.apache.org/jira/browse/ARTEMIS-1559?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16383670#comment-16383670
 ] 

Justin Bertram commented on ARTEMIS-1559:
-----------------------------------------

Instructions should probably include steps about:
* what GitHub repo to clone
* how to build Corda (which is required to run the test)
* what process to attach the profiler to (I see 3 related to Kotlin, 3 related 
to Gradle, and 1 related to Corda)
* expected console output from the test (I see a 
{{ActiveMQConnectionTimedOutException}}; is that expected?)

At the end of the day, the Gradle/Kotlin/Corda layers make diagnosing the 
underlying issue fairly difficult.  Ideally I'd like to have a simple Java 
reproducer running against a bare Artemis broker and nothing more.  I just 
don't have a lot of time to spend digging into all the surrounding layers given 
competing priorities.  You guys obviously understand how Artemis works.  Could 
you just rip out the essential core client logic that leads to this condition 
and slap it into a modified version of the "ssl-enabled" example?
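For reference, a minimal sketch of what such a standalone reproducer might look like, using only the core client API. Everything here is an assumption, not verified against the attached test: the class name is made up, port 61617 stands in for whatever the "ssl-enabled" example's acceptor uses, and the retry knobs are guesses at values that would approximate the "~1000s of retries within 10 seconds" described in the issue.

```java
// Hypothetical reproducer sketch (NOT verified): a plain-TCP core client
// hammering a broker acceptor that has sslEnabled=true, with no
// Corda/Gradle/Kotlin layers involved.  Port and attempt counts are
// assumptions -- adjust to match the "ssl-enabled" example's broker.xml.
import org.apache.activemq.artemis.api.core.client.ActiveMQClient;
import org.apache.activemq.artemis.api.core.client.ServerLocator;

public class PlainToSslOomReproducer {
    public static void main(String[] args) throws Exception {
        // Note: NO sslEnabled=true on the URL, so every attempt sends a
        // non-TLS handshake that the SSL acceptor should reject.
        ServerLocator locator =
                ActiveMQClient.createServerLocator("tcp://localhost:61617");
        locator.setInitialConnectAttempts(100); // bump toward ~1000 to match the report
        locator.setRetryInterval(10);           // 10 ms between attempts
        locator.setCallTimeout(10_000);         // 10 s, matching the reported timeout
        try {
            locator.createSessionFactory();     // expected to fail
        } catch (Exception e) {
            // Against a real SSL broker this is presumably where the
            // ActiveMQConnectionTimedOutException shows up; watch the
            // broker's heap (NettyAcceptor.initChannel) while this runs.
            System.out.println("Connection failed as expected: " + e);
        } finally {
            locator.close();
        }
    }
}
```

If the leak reproduces with this alone, the Corda layers can be ruled out entirely.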

> Repeated retries of unencrypted traffic to SSL-enabled broker causes OOM 
> exception
> ----------------------------------------------------------------------------------
>
>                 Key: ARTEMIS-1559
>                 URL: https://issues.apache.org/jira/browse/ARTEMIS-1559
>             Project: ActiveMQ Artemis
>          Issue Type: Bug
>    Affects Versions: 2.3.0, 2.4.0
>            Reporter: Tommy Lillehagen
>            Priority: Major
>              Labels: memory-leak
>         Attachments: mem-leak-source.png, mem-leak.png
>
>
> Repro steps:
>  * We have a broker (A) set up with SSL enabled (using Artemis v2.4 / v2.3)
>  * A client (B) set up to use plain (non-TLS) communication (same version of 
> Artemis)
>  * Trying to establish a connection between (B) and (A) triggers multiple 
> retries
>  * Each message gets, from what I can tell, rejected quickly by (A), but each 
> iteration leaks heap memory
>  * The large number of retries in a short amount of time (~1000s from what I 
> can tell) causes the heap to grow by 470M or so within less than 10 
> seconds (the set timeout)
>  * This consequently results in an out of memory exception
> The above behaviour is observed in both version 2.3 and 2.4.
> We've tested older versions (2.1 and 2.2), and neither of those manifests the 
> same problem.
> I've run some profiling on (A), and {{NettyAcceptor.initChannel}} 
> ({{getSslHandler()}}) seems to be a critical point (will include screenshots).
> That being said, most of the accumulated heap memory seems to be claimable 
> and is mostly collected during the next GC cycle in the tests that I've run.
> Source: https://github.com/corda/corda/pull/2252



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)
