[ https://issues.apache.org/jira/browse/LOG4J2-327?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Sebastian Oerding updated LOG4J2-327:
-------------------------------------

    Description: 
Hello,

we encountered a severe performance loss with log4j 2. Unfortunately we have no 
exact error description such as a stack trace, but I'll give it my best shot.

We are developing an application working with mass data. During development 
we run tests such as importing 15,000 files via FTP or creating 500,000 
business objects and working with them.

Switching from log4j 1.2.17 to log4j 2 we ran into really severe performance 
issues.

With beta7 we even got an OutOfMemoryError. Using a profiler, we saw a huge 
number of string objects (which, according to a control sample, were log 
messages).
After switching to beta8, the profiler shows only 10% of those string objects 
left, but that is still a large number. In particular, the strings live for a 
very long time (> 100 generations).

Using the RollingFileAppender and ConsoleAppender (not a FlumeAppender, for 
which a memory leak has been fixed in beta8) and default garbage collection, 
we saw the following behaviour with beta8:

Starting the application (each time with exactly the same test scenario), the 
machine quickly reaches the memory limit given to the JVM. Afterwards, garbage 
collection is invoked periodically, causing a long "stop the world" phase 
(8 minutes in another scenario), freeing memory down to a "base level", and the 
application continues until memory is full again. The base level remains the 
same for subsequent garbage collections.
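For reference, the pause lengths and the "base level" can also be captured 
without a profiler by enabling HotSpot's GC logging (standard JDK 1.6 flags; 
"app.jar" is a placeholder for the actual application):

```shell
# Standard HotSpot GC-logging flags on JDK 1.6; writes timestamped
# collection details (pause times, heap before/after) to gc.log.
java -verbose:gc \
     -XX:+PrintGCDetails \
     -XX:+PrintGCTimeStamps \
     -Xloggc:gc.log \
     -jar app.jar
```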

This may be related to the tickets https://issues.apache.org/jira/browse/LOG4J2-323, 
https://issues.apache.org/jira/browse/LOG4J2-322, and 
https://issues.apache.org/jira/browse/LOG4J2-304.

Furthermore, when looking into one string object that was not GCed (372 KB), I 
noticed that it contained a lot of log messages concatenated together. The 
application contains no log statement that logs such a big message. 
Looking towards the end of the string, I even found complete log events, 
including the log level and so on, not only the message. This feels related to 
https://issues.apache.org/jira/browse/LOG4J2-290; somehow log events / strings 
get concatenated / corrupted.

Maybe it is even a problem with the ParameterizedMessage, which we use a lot.
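For context, our log statements use the usual log4j 2 parameterized style, 
e.g. `logger.info("imported {} files via {}", count, "FTP")`. As a rough, 
self-contained stand-in (not log4j's actual implementation; the class and 
method names here are hypothetical), the `{}` substitution that 
ParameterizedMessage performs amounts to:

```java
// Hypothetical stand-in illustrating the "{}" placeholder substitution
// done by log4j2's ParameterizedMessage. Not the real implementation.
public class PlaceholderDemo {

    // Replace each "{}" in the pattern with the next argument, in order.
    static String format(String pattern, Object... args) {
        StringBuilder sb = new StringBuilder();
        int argIndex = 0;
        int i = 0;
        while (i < pattern.length()) {
            if (i + 1 < pattern.length()
                    && pattern.charAt(i) == '{'
                    && pattern.charAt(i + 1) == '}'
                    && argIndex < args.length) {
                sb.append(args[argIndex++]); // consume one argument
                i += 2;                      // skip over "{}"
            } else {
                sb.append(pattern.charAt(i++));
            }
        }
        return sb.toString();
    }

    public static void main(String[] args) {
        // prints: imported 15000 files via FTP
        System.out.println(format("imported {} files via {}", 15000, "FTP"));
    }
}
```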

Switching to the AsyncAppender (wrapping a FastRollingFileAppender), the 
application performs better than with the RollingFile but still badly. Running 
the same scenario with the FastRollingFile's immediateFlush and append 
parameters omitted, the FastRollingFile crashed due to an 
IllegalArgumentException in class ByteBuffer (the stack trace is lost).
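For completeness, the configuration under test looks roughly like the 
following sketch (element names as of the 2.0 betas, where spellings may 
differ slightly between releases; the file names and rollover policy are 
placeholders, and immediateFlush/append are the parameters mentioned above):

```xml
<Configuration status="warn">
  <Appenders>
    <!-- placeholder file names; immediateFlush/append are the
         parameters whose omission triggered the crash -->
    <FastRollingFile name="FastFile" fileName="logs/app.log"
                     filePattern="logs/app-%i.log.gz"
                     immediateFlush="false" append="true">
      <PatternLayout pattern="%d %p %c{1.} [%t] %m%n"/>
      <Policies>
        <SizeBasedTriggeringPolicy size="10 MB"/>
      </Policies>
    </FastRollingFile>
    <Async name="Async">
      <AppenderRef ref="FastFile"/>
    </Async>
  </Appenders>
  <Loggers>
    <Root level="info">
      <AppenderRef ref="Async"/>
    </Root>
  </Loggers>
</Configuration>
```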

> Log4j 2 drastical performance loss
> ----------------------------------
>
>                 Key: LOG4J2-327
>                 URL: https://issues.apache.org/jira/browse/LOG4J2-327
>             Project: Log4j 2
>          Issue Type: Bug
>          Components: Appenders, Core, Layouts
>    Affects Versions: 2.0-beta7, 2.0-beta8
>         Environment: Windows 7 64 bit, JDK 1.6.0_43
>            Reporter: Sebastian Oerding
>            Priority: Blocker
>              Labels: performance
>

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira

---------------------------------------------------------------------
To unsubscribe, e-mail: [email protected]
For additional commands, e-mail: [email protected]
