[ https://issues.apache.org/jira/browse/LOG4J2-505?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13909161#comment-13909161 ]

Remko Popma commented on LOG4J2-505:
------------------------------------

About the Disruptor in general, I am not aware of any issues. Personally, I love 
the design and the kind of architecture it affords, and I use the Disruptor 
extensively in a project at work. There is a Google group about the Disruptor 
that you may be interested in: 
https://groups.google.com/forum/#!forum/lmax-disruptor
That is the place to check whether other people are experiencing problems with 
the Disruptor.

About Async Loggers, I would say that the more threads you have doing logging, 
and the more messages you log, the more attractive Async Loggers become. AFAIK 
all AsyncLogger-specific problems have been resolved or have a workaround, 
except LOG4J2-520, which applies to both Async Loggers and Async Appenders and 
which I still need to investigate.
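For readers who have not tried Async Loggers yet, a minimal way to switch all loggers to asynchronous mode is the context-selector system property (this is the standard Log4j 2 mechanism, not specific to this issue; the jar name and main class below are placeholders, and the LMAX Disruptor jar must be on the classpath):

```shell
# Make every logger asynchronous by swapping in the async context selector.
# Without this flag, loggers remain synchronous unless <AsyncLogger> elements
# are used in log4j2.xml.
java -DLog4jContextSelector=org.apache.logging.log4j.core.async.AsyncLoggerContextSelector \
     -cp myapp.jar com.example.Main
```

Mixing synchronous and asynchronous loggers via `<AsyncLogger>` configuration elements (the code path involving AsyncLoggerConfigHelper, as in this report) does not require the system property.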

Beyond that, all I can say is that if a new issue is found, I will try to 
address it as soon as possible.
Does that answer your question?

> Memory leak with 
> org.apache.logging.log4j.core.async.AsyncLoggerConfigHelper$Log4jEventWrapper
> ----------------------------------------------------------------------------------------------
>
>                 Key: LOG4J2-505
>                 URL: https://issues.apache.org/jira/browse/LOG4J2-505
>             Project: Log4j 2
>          Issue Type: Bug
>          Components: Core
>    Affects Versions: 2.0-beta9
>            Reporter: Tal Liron
>            Assignee: Remko Popma
>             Fix For: 2.0-beta9
>
>
> Instances of this class seem to be created but never garbage collected. Here 
> is a jmap dump of the problem:
> https://dl.dropboxusercontent.com/u/122806/jvm8_gc2.zip
> Use jhat to analyze it: if you go to the instance count, you will see that 
> the aforementioned class is way out of control.
> Some background on how I discovered this, which may help: I am currently 
> working with the Oracle OpenJDK team to debug a memory leak in JSR-292 
> (invokedynamic) that has been present since 7u40 and also plagues OpenJDK 8 
> right now. The bug is prevalent in the Nashorn engine, 
> which is being shipped with JDK 8. Indeed, in the memory dump above, you'll 
> see that JSR-292 and Nashorn classes are also out of control -- but still 
> second to the log4j class!
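For anyone reproducing the jmap/jhat analysis described in the report, a typical workflow with the standard JDK 7/8 tools looks like this (the pid and file name are placeholders; jhat needs enough heap to load the dump, and the tool was removed in later JDKs):

```shell
# Capture a binary heap dump from the running JVM (replace 12345 with the
# target process id, e.g. from `jps`).
jmap -dump:format=b,file=heap.hprof 12345

# Serve the dump for browsing; jhat listens on http://localhost:7000 by
# default. The "Show instance counts for all classes" view is where the
# runaway Log4jEventWrapper count shows up.
jhat heap.hprof
```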



--
This message was sent by Atlassian JIRA
(v6.1.5#6160)
