Re: CPU Utilization: log4j 2.6 and greater

2017-02-18 Thread DiFrango, Ronald
Ralph,

Agreed, and we didn’t realize how sensitive we were to logging until this 
release, but as I outlined below it forced us to think about our config, etc.

To your point, our logging is NOT mission critical, so ASYNC makes a ton of 
sense; plus we weren’t using location information or anything else that actually 
causes ASYNC to be slower.  Our perf records are used in our dashboards, but 
we’ve started to switch over to Dropwizard Metrics to lessen our reliance upon 
logging.
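
A minimal sketch of what skipping location information looks like in a config 
(the logger name and appender ref are illustrative, and includeLocation already 
defaults to false for async loggers, so this only spells out the default):

    <AsyncLogger name="com.example.api" level="warn" includeLocation="false">
        <AppenderRef ref="App"/>
    </AsyncLogger>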

Thanks,

Ron DiFrango

On 2/18/17, 4:43 PM, "Apache" <ralph.go...@dslextreme.com> wrote:

Yes, it makes sense, and it proves the point that everyone should test 
their own configuration. RollingRandomAccessFile is usually faster but isn’t 
always. Using async logging will be a win if it is OK for the application not 
to know if a log message fails for some reason, and if you aren’t maxing out on 
disk I/O.

Ralph

> On Feb 18, 2017, at 2:09 PM, DiFrango, Ronald 
<ronald.difra...@capitalone.com> wrote:
> 
> Laurent,
> 
> We had several problems:
> 
> 1) We had two different roll-over strategies, 
> a. One being time based with no time specified, therefore the file rolled 
over every hour
> b. One being size based, which never triggered
> c. We switched to just the size based trigger
> 2) We had 7+ appenders that were defined when only 4 were needed
> 3) We didn’t have our logging level consistently set to our production 
standard of WARN
> 
> After correcting those issues, we switched over to:
> 
> 1) Switched over to full async logging
> 2) Switched to RollingRandomAccessFile, as it’s supposed to be the most 
efficient for async logging.
> 
> Make sense?
> 
> Ron DiFrango
> 
> 
> On 2/18/17, 3:32 PM, "l...@capsicohealth.com" <l...@capsicohealth.com> 
wrote:
> 
>What does it mean "make your xml config efficient"?
> 
> 
>Laurent Hasson
>Co-Founder & CTO, CapsicoHealth Inc.
>Sent from my BlackBerry Passport.
> 
>Research has shown that capsaicin—the substance that gives hot red 
peppers (or chilies) their kick, and boosts our metabolism—keeps immature fat 
cells from developing into full-fledged ones. And a study presented in April 
found that a compound in some sweet peppers (called CH-19 Sweet), which 
resembles capsaicin, provides similar positive metabolic effects
> 
> 
>  Original Message  
>    From: DiFrango, Ronald
>    Sent: Saturday, February 18, 2017 15:24
>To: Log4J Users List
>Reply To: Log4J Users List
>Subject: Re: CPU Utilization: log4j 2.6 and greater
> 
>Ralph,
> 
>Sorry, we just figured it all out between Thursday and Friday. Basically it 
came down to a combination of both version 2.6.2 and our own code ☹ If we 
upgraded to 2.7 and above, the utilization was still a little higher, but not 
nearly as bad as 2.6.2. Plus, like I said, we had some inefficient code with 
high logging rates, which made matters worse.
> 
>Basically, we found that 2.6, 2.7, and 2.8 had approximately the same CPU 
load, but 2.6.2 wasn’t great because of the threading issue; like you said, it 
spun threads off.
> 
>We also found some inefficiencies in our log4j2.xml file that caused more 
roll-overs to happen, which made matters worse with 2.6.2.
> 
>So the net was:
> 
>1) Some bad code on our part
>2) Upgrade to 2.7 and beyond
>3) Ensure your log4j2.xml file is efficient
> 
>Thanks for the sounding board. 
> 
>Ron DiFrango
> 
> 
>On 2/18/17, 3:01 PM, "Apache" <ralph.go...@dslextreme.com> wrote:
> 
>Any news on this?
> 
>Ralph
> 
>> On Feb 13, 2017, at 5:00 PM, DiFrango, Ronald 
<ronald.difra...@capitalone.com> wrote:
>> 
>> We’re going to try and pull that for you as we’re still working through 
what is going on.
>> 
>> One question, though what would you use in place of the 
DefaultRolloverStrategy and max?
>> 
>> Ron DiFrango
>> 
>> On 2/13/17, 6:43 PM, "Remko Popma" <remko.po...@gmail.com> wrote:
>> 
>> I've seen cases where the "max" in <DefaultRolloverStrategy max="${maxLogFiles}"/> caused issues because the directory contained many files 
that all needed to be scanned to see if they matched old log files to delete. 
>> 
>> That said, it would be good if you can share the results of jstack and 
do a run with Java Flight Recorder. 
>> 
>> Sent from my iPhone
>> 
>>> On Feb

Re: CPU Utilization: log4j 2.6 and greater

2017-02-18 Thread DiFrango, Ronald
Laurent,

We had several problems:

1) We had two different roll-over strategies: 
a. One being time-based with no interval specified, therefore the file rolled over 
every hour
b. One being size-based, which never triggered
c. We switched to just the size-based trigger
2) We had 7+ appenders that were defined when only 4 were needed
3) We didn’t have our logging level consistently set to our production standard 
of WARN

After correcting those issues, we switched over to:

1) Switched over to full async logging
2) Switched to RollingRandomAccessFile, as it’s supposed to be the most efficient 
for async logging (a minimal sketch of the resulting config is below).
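
A rough sketch of that simplified config, using the single size-based trigger, 
the 10MB size and the max of 2 mentioned later in the thread (file names, the 
date pattern and the layout are illustrative, and this mirrors the 
RollingRandomAccessFile snippet quoted further down):

    <Configuration status="warn">
      <Appenders>
        <!-- single size-based trigger only; no time-based policy, so no hourly roll-overs -->
        <RollingRandomAccessFile name="App" fileName="${base.log.dir}/app.log"
            filePattern="logs/$${date:yyyy-MM}/app-%d{MM-dd-yyyy}-%i.log.gz"
            immediateFlush="false" append="true">
          <PatternLayout pattern="[%t] %d{DATE} %-5p %-15c{1} [%X]: %m%n"/>
          <Policies>
            <SizeBasedTriggeringPolicy size="10 MB"/>
          </Policies>
          <DefaultRolloverStrategy max="2"/>
        </RollingRandomAccessFile>
      </Appenders>
      <Loggers>
        <!-- production default of WARN; full async loggers are enabled with the
             -DLog4jContextSelector=org.apache.logging.log4j.core.async.AsyncLoggerContextSelector
             system property rather than in this file -->
        <Root level="warn">
          <AppenderRef ref="App"/>
        </Root>
      </Loggers>
    </Configuration>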

Make sense?

Ron DiFrango


On 2/18/17, 3:32 PM, "l...@capsicohealth.com" <l...@capsicohealth.com> wrote:

What does it mean "make your xml config efficient"?


Laurent Hasson
Co-Founder & CTO, CapsicoHealth Inc.
Sent from my BlackBerry Passport.

Research has shown that capsaicin—the substance that gives hot red peppers 
(or chilies) their kick, and boosts our metabolism—keeps immature fat cells 
from developing into full-fledged ones. And a study presented in April found 
that a compound in some sweet peppers (called CH-19 Sweet), which resembles 
capsaicin, provides similar positive metabolic effects


  Original Message  
From: DiFrango, Ronald
Sent: Saturday, February 18, 2017 15:24
To: Log4J Users List
Reply To: Log4J Users List
    Subject: Re: CPU Utilization: log4j 2.6 and greater

Ralph,

Sorry, we just figured it all out between Thursday and Friday. Basically it 
came down to a combination of both version 2.6.2 and our own code ☹ If we 
upgraded to 2.7 and above, the utilization was still a little higher, but not 
nearly as bad as 2.6.2. Plus, like I said, we had some inefficient code with 
high logging rates, which made matters worse.

Basically, we found that 2.6, 2.7, and 2.8 had approximately the same CPU 
load, but 2.6.2 wasn’t great because of the threading issue; like you said, it 
spun threads off.

We also found some inefficiencies in our log4j2.xml file that caused more 
roll-overs to happen, which made matters worse with 2.6.2.

So the net was:

1) Some bad code on our part
2) Upgrade to 2.7 and beyond
3) Ensure your log4j2.xml file is efficient

Thanks for the sounding board. 

Ron DiFrango


On 2/18/17, 3:01 PM, "Apache" <ralph.go...@dslextreme.com> wrote:

Any news on this?

Ralph

> On Feb 13, 2017, at 5:00 PM, DiFrango, Ronald 
<ronald.difra...@capitalone.com> wrote:
> 
> We’re going to try and pull that for you as we’re still working through 
what is going on.
> 
> One question, though what would you use in place of the 
DefaultRolloverStrategy and max?
> 
> Ron DiFrango
> 
> On 2/13/17, 6:43 PM, "Remko Popma" <remko.po...@gmail.com> wrote:
> 
> I've seen cases where the "max" in <DefaultRolloverStrategy max="${maxLogFiles}"/> caused issues because the directory contained many files 
that all needed to be scanned to see if they matched old log files to delete. 
> 
> That said, it would be good if you can share the results of jstack and do 
a run with Java Flight Recorder. 
> 
> Sent from my iPhone
> 
>> On Feb 14, 2017, at 8:28, Apache <ralph.go...@dslextreme.com> wrote:
>> 
>> If you are seeing lots of threads can you do a jstack on the process so 
we can see what they are?
>> 
>> Ralph
>> 
>>> On Feb 13, 2017, at 3:50 PM, DiFrango, Ronald 
<ronald.difra...@capitalone.com> wrote:
>>> 
>>> Our policy generally looks like the following where max log files is 2 
and size is 10MB and we have 6 appenders configured for various use cases, though 
the default is WARN so other than our performance logs, nothing is written 
at high volume:
>>> 
>>> >> filePattern="logs/$${date:-MM}/app-%d{MM-dd-}-%i.log.gz" 
immediateFlush="false" append="true">
>>> 
>>> 
>>> 
>>> 
>>> 
>>> 
>>> 
>>> 
>>> Ron DiFrango
>>> 
>>> On 2/13/17, 3:25 PM, "Matt Sicker" <boa...@gmail.com> wrote:
>>> 
>>> If lots of data are being logged really quickly, then I could see that
>>> happening.
>>> 
>>>> On 13 February 2017 at 14:20, Apache <ralph.go...@dslextreme.com> 
wrote:
>>>> 
>>>> The only threads in Log4j that I can think of that are short lived are 
the
>>>> threads that compress the files at rollover. That would only consume a 
lot
>>>

Re: CPU Utilization: log4j 2.6 and greater

2017-02-18 Thread ldh
What does it mean to "make your xml config efficient"?


Laurent Hasson
Co-Founder & CTO, CapsicoHealth Inc.
Sent from my BlackBerry Passport.

Research has shown that capsaicin—the substance that gives hot red peppers (or 
chilies) their kick, and boosts our metabolism—keeps immature fat cells from 
developing into full-fledged ones. And a study presented in April found that a 
compound in some sweet peppers (called CH-19 Sweet), which resembles capsaicin, 
provides similar positive metabolic effects


  Original Message  
From: DiFrango, Ronald
Sent: Saturday, February 18, 2017 15:24
To: Log4J Users List
Reply To: Log4J Users List
Subject: Re: CPU Utilization: log4j 2.6 and greater

Ralph,

Sorry, we just figured it all out between Thursday and Friday. Basically it 
came down to a combination of both version 2.6.2 and our own code ☹ If we 
upgraded to 2.7 and above, the utilization was still a little higher, but not 
nearly as bad as 2.6.2. Plus, like I said, we had some inefficient code with 
high logging rates, which made matters worse.

Basically, we found that 2.6, 2.7, and 2.8 had approximately the same CPU load, 
but 2.6.2 wasn’t great because of the threading issue; like you said, it spun 
threads off.

We also found some inefficiencies in our log4j2.xml file that caused more 
roll-overs to happen, which made matters worse with 2.6.2.

So the net was:

1) Some bad code on our part
2) Upgrade to 2.7 and beyond
3) Ensure your log4j2.xml file is efficient

Thanks for the sounding board. 

Ron DiFrango


On 2/18/17, 3:01 PM, "Apache" <ralph.go...@dslextreme.com> wrote:

Any news on this?

Ralph

> On Feb 13, 2017, at 5:00 PM, DiFrango, Ronald 
> <ronald.difra...@capitalone.com> wrote:
> 
> We’re going to try and pull that for you as we’re still working through what 
> is going on.
> 
> One question, though what would you use in place of the 
> DefaultRolloverStrategy and max?
> 
> Ron DiFrango
> 
> On 2/13/17, 6:43 PM, "Remko Popma" <remko.po...@gmail.com> wrote:
> 
> I've seen cases where the "max" in <DefaultRolloverStrategy max="${maxLogFiles}"/> caused issues because the directory contained many 
> files that all needed to be scanned to see if they matched old log files to 
> delete. 
> 
> That said, it would be good if you can share the results of jstack and do a 
> run with Java Flight Recorder. 
> 
> Sent from my iPhone
> 
>> On Feb 14, 2017, at 8:28, Apache <ralph.go...@dslextreme.com> wrote:
>> 
>> If you are seeing lots of threads can you do a jstack on the process so we 
>> can see what they are?
>> 
>> Ralph
>> 
>>> On Feb 13, 2017, at 3:50 PM, DiFrango, Ronald 
>>> <ronald.difra...@capitalone.com> wrote:
>>> 
>>> Our policy generally looks like the following where max log files is 2 and 
>>> size is 10MB and we 6 appenders configured for various use cases, though 
>>> the default is WARN so other than our performance logs, nothing is written 
>>> to at high volume:
>>> 
>>> >> fileName="${base.log.dir}/app.log"
>>> filePattern="logs/$${date:-MM}/app-%d{MM-dd-}-%i.log.gz" 
>>> immediateFlush="false" append="true">
>>> 
>>> 
>>> 
>>> 
>>> 
>>> 
>>> 
>>> 
>>> Ron DiFrango
>>> 
>>> On 2/13/17, 3:25 PM, "Matt Sicker" <boa...@gmail.com> wrote:
>>> 
>>> If lots of data are being logged really quickly, then I could see that
>>> happening.
>>> 
>>>> On 13 February 2017 at 14:20, Apache <ralph.go...@dslextreme.com> wrote:
>>>> 
>>>> The only threads in Log4j that I can think of that are short lived are the
>>>> threads that compress the files at rollover. That would only consume a lot
>>>> of CPU if rollover is happening a lot.
>>>> 
>>>> Ralph
>>>> 
>>>>> On Feb 13, 2017, at 1:12 PM, Christopher Schultz <
>>>> ch...@christopherschultz.net> wrote:
>>>>> 
>>>>> Ronald,
>>>>> 
>>>>>> On 2/13/17 2:29 PM, DiFrango, Ronald wrote:
>>>>>> Yeah, I’m aware and agreed, but this is done in the Apache HTTP Client
>>>> Component’s Code, not ours and existed as-is with log4j2 in ALL the
>>>> versions we’ve been using.
>>>>>> 
>>>>>> That said, I think this is a red herring and it’s still true that at
>>>> least for our application any version of log4j2 beyond 2.5 consumes more
>>>> CPU that 2.5 this is a REST based APi using Tomcat+Jersey+Spring [ag

Re: CPU Utilization: log4j 2.6 and greater

2017-02-18 Thread DiFrango, Ronald
Ralph,

Sorry, we just figured it all out between Thursday and Friday. Basically it 
came down to a combination of both version 2.6.2 and our own code ☹  If we 
upgraded to 2.7 and above, the utilization was still a little higher, but not 
nearly as bad as 2.6.2.  Plus, like I said, we had some inefficient code with 
high logging rates, which made matters worse.

Basically, we found that 2.6, 2.7, and 2.8 had approximately the same CPU load, 
but 2.6.2 wasn’t great because of the threading issue; like you said, it spun 
threads off.

We also found some inefficiencies in our log4j2.xml file that caused more 
roll-overs to happen, which made matters worse with 2.6.2.

So the net was:

1) Some bad code on our part
2) Upgrade to 2.7 and beyond
3) Ensure your log4j2.xml file is efficient

Thanks for the sounding board. 

Ron DiFrango


On 2/18/17, 3:01 PM, "Apache"  wrote:

Any news on this?

Ralph

> On Feb 13, 2017, at 5:00 PM, DiFrango, Ronald 
 wrote:
> 
> We’re going to try and pull that for you as we’re still working through 
what is going on.
> 
> One question, though what would you use in place of the 
DefaultRolloverStrategy and max?
> 
> Ron DiFrango
> 
> On 2/13/17, 6:43 PM, "Remko Popma"  wrote:
> 
>I've seen cases where the "max" in <DefaultRolloverStrategy max="${maxLogFiles}"/> caused issues because the directory contained many files 
that all needed to be scanned to see if they matched old log files to delete. 
> 
>That said, it would be good if you can share the results of jstack and 
do a run with Java Flight Recorder. 
> 
>Sent from my iPhone
> 
>> On Feb 14, 2017, at 8:28, Apache  wrote:
>> 
>> If you are seeing lots of threads can you do a jstack on the process so 
we can see what they are?
>> 
>> Ralph
>> 
>>> On Feb 13, 2017, at 3:50 PM, DiFrango, Ronald 
 wrote:
>>> 
>>> Our policy generally looks like the following where max log files is 2 
and size is 10MB and we 6 appenders configured for various use cases, though 
the default is WARN so other than our performance logs, nothing is written to 
at high volume:
>>> 
>>> >>   
filePattern="logs/$${date:-MM}/app-%d{MM-dd-}-%i.log.gz" 
immediateFlush="false" append="true">
>>>  
>>>  
>>>  
>>>  
>>>  
>>>  
>>>  
>>> 
>>> Ron DiFrango
>>> 
>>> On 2/13/17, 3:25 PM, "Matt Sicker"  wrote:
>>> 
>>>  If lots of data are being logged really quickly, then I could see that
>>>  happening.
>>> 
  On 13 February 2017 at 14:20, Apache  
wrote:
 
 The only threads in Log4j that I can think of that are short lived are 
the
 threads that compress the files at rollover. That would only consume a 
lot
 of CPU if rollover is happening a lot.
 
 Ralph
 
> On Feb 13, 2017, at 1:12 PM, Christopher Schultz <
 ch...@christopherschultz.net> wrote:
> 
> Ronald,
> 
>> On 2/13/17 2:29 PM, DiFrango, Ronald wrote:
>> Yeah, I’m aware and agreed, but this is done in the Apache HTTP 
Client
 Component’s Code, not ours and existed as-is with log4j2 in ALL the
 versions we’ve been using.
>> 
>> That said, I think this is a red herring and it’s still true that at
 least for our application any version of log4j2 beyond 2.5 consumes 
more
 CPU that 2.5 this is a REST based APi using Tomcat+Jersey+Spring [again
 nothing has changed] and processing around 50 transactions per second 
with
 a guarantee of a minimum of 1 log record to request.
>> 
>> We just haven’t identified WHERE its causing the issue.
>> 
>> One additional data point, we are using the bridging JAR’s for the
 following as not all 3rd party libraries follow any standard on what 
they
 use:
>> 
>>Classis log4j
>>SLF4J
>>commons-logging
> 
> 
> You mentioned that lots of threads were being created and that they 
were
> short-lived. What code is creating those threads? You never said...
> 
> -chris
> 
>> On 2/13/17, 2:11 PM, "Apache"  wrote:
>> 
>> The log.debug call will exit quickly, but if it is constructing a
 large message that will happen anyway unless one of the standard 
techniques
 (like wrapping in a logger.isDebugEnabled call) is used to avoid it.
 Constructing debug Strings can be expensive.
>> 
>> Ralph
>> 
>>> On Feb 13, 2017, at 10:58 AM, DiFrango, 

Re: CPU Utilization: log4j 2.6 and greater

2017-02-18 Thread Apache
Any news on this?

Ralph

> On Feb 13, 2017, at 5:00 PM, DiFrango, Ronald 
>  wrote:
> 
> We’re going to try and pull that for you as we’re still working through what 
> is going on.
> 
> One question, though what would you use in place of the 
> DefaultRolloverStrategy and max?
> 
> Ron DiFrango
> 
> On 2/13/17, 6:43 PM, "Remko Popma"  wrote:
> 
>I've seen cases where the "max" in <DefaultRolloverStrategy max="${maxLogFiles}"/> caused issues because the directory contained many 
> files that all needed to be scanned to see if they matched old log files to 
> delete. 
> 
>That said, it would be good if you can share the results of jstack and do 
> a run with Java Flight Recorder. 
> 
>Sent from my iPhone
> 
>> On Feb 14, 2017, at 8:28, Apache  wrote:
>> 
>> If you are seeing lots of threads can you do a jstack on the process so we 
>> can see what they are?
>> 
>> Ralph
>> 
>>> On Feb 13, 2017, at 3:50 PM, DiFrango, Ronald 
>>>  wrote:
>>> 
>>> Our policy generally looks like the following where max log files is 2 and 
>>> size is 10MB and we 6 appenders configured for various use cases, though 
>>> the default is WARN so other than our performance logs, nothing is written 
>>> to at high volume:
>>> 
>>> >> fileName="${base.log.dir}/app.log"
>>>   
>>> filePattern="logs/$${date:-MM}/app-%d{MM-dd-}-%i.log.gz" 
>>> immediateFlush="false" append="true">
>>>  
>>>  
>>>  
>>>  
>>>  
>>>  
>>>  
>>> 
>>> Ron DiFrango
>>> 
>>> On 2/13/17, 3:25 PM, "Matt Sicker"  wrote:
>>> 
>>>  If lots of data are being logged really quickly, then I could see that
>>>  happening.
>>> 
  On 13 February 2017 at 14:20, Apache  wrote:
 
 The only threads in Log4j that I can think of that are short lived are the
 threads that compress the files at rollover. That would only consume a lot
 of CPU if rollover is happening a lot.
 
 Ralph
 
> On Feb 13, 2017, at 1:12 PM, Christopher Schultz <
 ch...@christopherschultz.net> wrote:
> 
> Ronald,
> 
>> On 2/13/17 2:29 PM, DiFrango, Ronald wrote:
>> Yeah, I’m aware and agreed, but this is done in the Apache HTTP Client
 Component’s Code, not ours and existed as-is with log4j2 in ALL the
 versions we’ve been using.
>> 
>> That said, I think this is a red herring and it’s still true that at
 least for our application any version of log4j2 beyond 2.5 consumes more
 CPU that 2.5 this is a REST based APi using Tomcat+Jersey+Spring [again
 nothing has changed] and processing around 50 transactions per second with
 a guarantee of a minimum of 1 log record to request.
>> 
>> We just haven’t identified WHERE its causing the issue.
>> 
>> One additional data point, we are using the bridging JAR’s for the
 following as not all 3rd party libraries follow any standard on what they
 use:
>> 
>>Classis log4j
>>SLF4J
>>commons-logging
> 
> 
> You mentioned that lots of threads were being created and that they were
> short-lived. What code is creating those threads? You never said...
> 
> -chris
> 
>> On 2/13/17, 2:11 PM, "Apache"  wrote:
>> 
>> The log.debug call will exit quickly, but if it is constructing a
 large message that will happen anyway unless one of the standard techniques
 (like wrapping in a logger.isDebugEnabled call) is used to avoid it.
 Constructing debug Strings can be expensive.
>> 
>> Ralph
>> 
>>> On Feb 13, 2017, at 10:58 AM, DiFrango, Ronald <
 ronald.difra...@capitalone.com> wrote:
>>> 
>>> No we are not and since it’s a log.debug() it should just get dropped
 on the floor before any sort of copying occurs, right?
>>> 
>>> Ron DiFrango
>>> 
>>> On 2/13/17, 12:50 PM, "Matt Sicker"  wrote:
>>> 
>>> Are you logging that info?
>>> 
>>> On 13 February 2017 at 11:46, DiFrango, Ronald <
>>> ronald.difra...@capitalone.com> wrote:
>>> 
 All,
 
 One thing we just noticed is that we are using Apache’s HTTP
 connection
 pooling for our downstream calls and in our latest performance run is
 that
 the READ operation on org.apache.http.impl.conn.LoggingInputStream
 seems
 to be taking a bulk of the time.
 
 If I look at the code, once the HTTP client reads the byte stream it
 issues a log.debug which could be a large payload, I wonder if that’s
 what
 is causing the issue?
 
 Please note aware, the version of httpclient we’ve been using has also
 been constant:
 
 
org.apache.httpcomponents
httpclient

Re: CPU Utilization: log4j 2.6 and greater

2017-02-13 Thread Remko Popma
A value of max=2 means that the rollover strategy ensures no more than 2 
rolled-over log files are kept. But all files in the directory are inspected. 
If the directory contains many files, the scan can take a while. 

An alternative is the <Delete> action. 
(http://logging.apache.org/log4j/2.x/manual/appenders.html#CustomDeleteOnRollover)
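
A rough sketch of that alternative, deleting old archives by name and age at 
rollover instead of relying on the max counter scan (the base path, glob and 
age are illustrative):

    <DefaultRolloverStrategy>
      <Delete basePath="logs" maxDepth="2">
        <!-- only consider compressed app archives, and only ones older than 30 days -->
        <IfFileName glob="*/app-*.log.gz"/>
        <IfLastModified age="30d"/>
      </Delete>
    </DefaultRolloverStrategy>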

But before making any changes I would first use the diagnostic tools like 
jstack and jfr to figure out what is going on. 

Remko 

Sent from my iPhone

> On Feb 14, 2017, at 10:20, Apache  wrote:
> 
> With a max of 2 I wouldn’t expect any problems. It does mean that you are 
> only going to keep the last 2 files each day, and the last one might not be 
> full when it rolls over at midnight. If maxFiles was 
> 100 and you had more than 100 files per day then the time it would spend 
> renaming files would be considerable. But it would be doing that on one of 
> your application threads, not as an asynchronous task.
> 
> Ralph
> 
>> On Feb 13, 2017, at 5:00 PM, DiFrango, Ronald 
>>  wrote:
>> 
>> We’re going to try and pull that for you as we’re still working through what 
>> is going on.
>> 
>> One question, though what would you use in place of the 
>> DefaultRolloverStrategy and max?
>> 
>> Ron DiFrango
>> 
>> On 2/13/17, 6:43 PM, "Remko Popma"  wrote:
>> 
>>   I've seen cases where the "max" in > max="${maxLogFiles}"/> caused issues because the directory contained many 
>> files that all needed to be scanned to see if they matched old log files to 
>> delete. 
>> 
>>   That said, it would be good if you can share the results of jstack and do 
>> a run with Java Flight Recorder. 
>> 
>>   Sent from my iPhone
>> 
>>> On Feb 14, 2017, at 8:28, Apache  wrote:
>>> 
>>> If you are seeing lots of threads can you do a jstack on the process so we 
>>> can see what they are?
>>> 
>>> Ralph
>>> 
 On Feb 13, 2017, at 3:50 PM, DiFrango, Ronald 
  wrote:
 
 Our policy generally looks like the following where max log files is 2 and 
 size is 10MB and we 6 appenders configured for various use cases, though 
 the default is WARN so other than our performance logs, nothing is written 
 to at high volume:
 
 >>> fileName="${base.log.dir}/app.log"
  
 filePattern="logs/$${date:-MM}/app-%d{MM-dd-}-%i.log.gz" 
 immediateFlush="false" append="true">
 
 
 
 
 
 
 
 
 Ron DiFrango
 
 On 2/13/17, 3:25 PM, "Matt Sicker"  wrote:
 
 If lots of data are being logged really quickly, then I could see that
 happening.
 
> On 13 February 2017 at 14:20, Apache  wrote:
> 
> The only threads in Log4j that I can think of that are short lived are the
> threads that compress the files at rollover. That would only consume a lot
> of CPU if rollover is happening a lot.
> 
> Ralph
> 
>> On Feb 13, 2017, at 1:12 PM, Christopher Schultz <
> ch...@christopherschultz.net> wrote:
>> 
>> Ronald,
>> 
>>> On 2/13/17 2:29 PM, DiFrango, Ronald wrote:
>>> Yeah, I’m aware and agreed, but this is done in the Apache HTTP Client
> Component’s Code, not ours and existed as-is with log4j2 in ALL the
> versions we’ve been using.
>>> 
>>> That said, I think this is a red herring and it’s still true that at
> least for our application any version of log4j2 beyond 2.5 consumes more
> CPU that 2.5 this is a REST based APi using Tomcat+Jersey+Spring [again
> nothing has changed] and processing around 50 transactions per second with
> a guarantee of a minimum of 1 log record to request.
>>> 
>>> We just haven’t identified WHERE its causing the issue.
>>> 
>>> One additional data point, we are using the bridging JAR’s for the
> following as not all 3rd party libraries follow any standard on what they
> use:
>>> 
>>>   Classis log4j
>>>   SLF4J
>>>   commons-logging
>> 
>> 
>> You mentioned that lots of threads were being created and that they were
>> short-lived. What code is creating those threads? You never said...
>> 
>> -chris
>> 
>>> On 2/13/17, 2:11 PM, "Apache"  wrote:
>>> 
>>> The log.debug call will exit quickly, but if it is constructing a
> large message that will happen anyway unless one of the standard 
> techniques
> (like wrapping in a logger.isDebugEnabled call) is used to avoid it.
> Constructing debug Strings can be expensive.
>>> 
>>> Ralph
>>> 
 On Feb 13, 2017, at 10:58 AM, DiFrango, Ronald <
> ronald.difra...@capitalone.com> wrote:
 
 No we are not and since it’s a 

Re: CPU Utilization: log4j 2.6 and greater

2017-02-13 Thread Apache
With a max of 2 I wouldn’t expect any problems. It does mean that you are only 
going to keep the last 2 files each day, and the last one might not be full 
when it rolls over at midnight. If maxFiles was 100 and you had more than 100 
files per day then the time it would spend renaming files would be considerable. 
But it would be doing that on one of your application threads, not as an 
asynchronous task.

Ralph

> On Feb 13, 2017, at 5:00 PM, DiFrango, Ronald 
>  wrote:
> 
> We’re going to try and pull that for you as we’re still working through what 
> is going on.
> 
> One question, though what would you use in place of the 
> DefaultRolloverStrategy and max?
> 
> Ron DiFrango
> 
> On 2/13/17, 6:43 PM, "Remko Popma"  wrote:
> 
>I've seen cases where the "max" in <DefaultRolloverStrategy max="${maxLogFiles}"/> caused issues because the directory contained many 
> files that all needed to be scanned to see if they matched old log files to 
> delete. 
> 
>That said, it would be good if you can share the results of jstack and do 
> a run with Java Flight Recorder. 
> 
>Sent from my iPhone
> 
>> On Feb 14, 2017, at 8:28, Apache  wrote:
>> 
>> If you are seeing lots of threads can you do a jstack on the process so we 
>> can see what they are?
>> 
>> Ralph
>> 
>>> On Feb 13, 2017, at 3:50 PM, DiFrango, Ronald 
>>>  wrote:
>>> 
>>> Our policy generally looks like the following where max log files is 2 and 
>>> size is 10MB and we 6 appenders configured for various use cases, though 
>>> the default is WARN so other than our performance logs, nothing is written 
>>> to at high volume:
>>> 
>>> >> fileName="${base.log.dir}/app.log"
>>>   
>>> filePattern="logs/$${date:-MM}/app-%d{MM-dd-}-%i.log.gz" 
>>> immediateFlush="false" append="true">
>>>  
>>>  
>>>  
>>>  
>>>  
>>>  
>>>  
>>> 
>>> Ron DiFrango
>>> 
>>> On 2/13/17, 3:25 PM, "Matt Sicker"  wrote:
>>> 
>>>  If lots of data are being logged really quickly, then I could see that
>>>  happening.
>>> 
  On 13 February 2017 at 14:20, Apache  wrote:
 
 The only threads in Log4j that I can think of that are short lived are the
 threads that compress the files at rollover. That would only consume a lot
 of CPU if rollover is happening a lot.
 
 Ralph
 
> On Feb 13, 2017, at 1:12 PM, Christopher Schultz <
 ch...@christopherschultz.net> wrote:
> 
> Ronald,
> 
>> On 2/13/17 2:29 PM, DiFrango, Ronald wrote:
>> Yeah, I’m aware and agreed, but this is done in the Apache HTTP Client
 Component’s Code, not ours and existed as-is with log4j2 in ALL the
 versions we’ve been using.
>> 
>> That said, I think this is a red herring and it’s still true that at
 least for our application any version of log4j2 beyond 2.5 consumes more
 CPU that 2.5 this is a REST based APi using Tomcat+Jersey+Spring [again
 nothing has changed] and processing around 50 transactions per second with
 a guarantee of a minimum of 1 log record to request.
>> 
>> We just haven’t identified WHERE its causing the issue.
>> 
>> One additional data point, we are using the bridging JAR’s for the
 following as not all 3rd party libraries follow any standard on what they
 use:
>> 
>>Classis log4j
>>SLF4J
>>commons-logging
> 
> 
> You mentioned that lots of threads were being created and that they were
> short-lived. What code is creating those threads? You never said...
> 
> -chris
> 
>> On 2/13/17, 2:11 PM, "Apache"  wrote:
>> 
>> The log.debug call will exit quickly, but if it is constructing a
 large message that will happen anyway unless one of the standard techniques
 (like wrapping in a logger.isDebugEnabled call) is used to avoid it.
 Constructing debug Strings can be expensive.
>> 
>> Ralph
>> 
>>> On Feb 13, 2017, at 10:58 AM, DiFrango, Ronald <
 ronald.difra...@capitalone.com> wrote:
>>> 
>>> No we are not and since it’s a log.debug() it should just get dropped
 on the floor before any sort of copying occurs, right?
>>> 
>>> Ron DiFrango
>>> 
>>> On 2/13/17, 12:50 PM, "Matt Sicker"  wrote:
>>> 
>>> Are you logging that info?
>>> 
>>> On 13 February 2017 at 11:46, DiFrango, Ronald <
>>> ronald.difra...@capitalone.com> wrote:
>>> 
 All,
 
 One thing we just noticed is that we are using Apache’s HTTP
 connection
 pooling for our downstream calls and in our latest performance run is
 that
 the READ operation on org.apache.http.impl.conn.LoggingInputStream
 seems
 to be taking a bulk 

Re: CPU Utilization: log4j 2.6 and greater

2017-02-13 Thread Remko Popma
I've seen cases where the "max" in <DefaultRolloverStrategy max="${maxLogFiles}"/> caused issues because the directory contained many files 
that all needed to be scanned to see if they matched old log files to delete. 

That said, it would be good if you can share the results of jstack and do a run 
with Java Flight Recorder. 

Sent from my iPhone

> On Feb 14, 2017, at 8:28, Apache  wrote:
> 
> If you are seeing lots of threads can you do a jstack on the process so we 
> can see what they are?
> 
> Ralph
> 
>> On Feb 13, 2017, at 3:50 PM, DiFrango, Ronald 
>>  wrote:
>> 
>> Our policy generally looks like the following where max log files is 2 and 
>> size is 10MB and we 6 appenders configured for various use cases, though the 
>> default is WARN so other than our performance logs, nothing is written to at 
>> high volume:
>> 
>> >
>> filePattern="logs/$${date:-MM}/app-%d{MM-dd-}-%i.log.gz" 
>> immediateFlush="false" append="true">
>>   
>>   
>>   
>>   
>>   
>>   
>>   
>> 
>> Ron DiFrango
>> 
>> On 2/13/17, 3:25 PM, "Matt Sicker"  wrote:
>> 
>>   If lots of data are being logged really quickly, then I could see that
>>   happening.
>> 
>>>   On 13 February 2017 at 14:20, Apache  wrote:
>>> 
>>> The only threads in Log4j that I can think of that are short lived are the
>>> threads that compress the files at rollover. That would only consume a lot
>>> of CPU if rollover is happening a lot.
>>> 
>>> Ralph
>>> 
 On Feb 13, 2017, at 1:12 PM, Christopher Schultz <
>>> ch...@christopherschultz.net> wrote:
 
 Ronald,
 
> On 2/13/17 2:29 PM, DiFrango, Ronald wrote:
> Yeah, I’m aware and agreed, but this is done in the Apache HTTP Client
>>> Component’s Code, not ours and existed as-is with log4j2 in ALL the
>>> versions we’ve been using.
> 
> That said, I think this is a red herring and it’s still true that at
>>> least for our application any version of log4j2 beyond 2.5 consumes more
>>> CPU that 2.5 this is a REST based APi using Tomcat+Jersey+Spring [again
>>> nothing has changed] and processing around 50 transactions per second with
>>> a guarantee of a minimum of 1 log record to request.
> 
> We just haven’t identified WHERE its causing the issue.
> 
> One additional data point, we are using the bridging JAR’s for the
>>> following as not all 3rd party libraries follow any standard on what they
>>> use:
> 
> Classis log4j
> SLF4J
> commons-logging
 
 
 You mentioned that lots of threads were being created and that they were
 short-lived. What code is creating those threads? You never said...
 
 -chris
 
> On 2/13/17, 2:11 PM, "Apache"  wrote:
> 
>  The log.debug call will exit quickly, but if it is constructing a
>>> large message that will happen anyway unless one of the standard techniques
>>> (like wrapping in a logger.isDebugEnabled call) is used to avoid it.
>>> Constructing debug Strings can be expensive.
> 
>  Ralph
> 
>> On Feb 13, 2017, at 10:58 AM, DiFrango, Ronald <
>>> ronald.difra...@capitalone.com> wrote:
>> 
>> No we are not and since it’s a log.debug() it should just get dropped
>>> on the floor before any sort of copying occurs, right?
>> 
>> Ron DiFrango
>> 
>> On 2/13/17, 12:50 PM, "Matt Sicker"  wrote:
>> 
>> Are you logging that info?
>> 
>> On 13 February 2017 at 11:46, DiFrango, Ronald <
>> ronald.difra...@capitalone.com> wrote:
>> 
>>> All,
>>> 
>>> One thing we just noticed is that we are using Apache’s HTTP
>>> connection
>>> pooling for our downstream calls and in our latest performance run is
>>> that
>>> the READ operation on org.apache.http.impl.conn.LoggingInputStream
>>> seems
>>> to be taking a bulk of the time.
>>> 
>>> If I look at the code, once the HTTP client reads the byte stream it
>>> issues a log.debug which could be a large payload, I wonder if that’s
>>> what
>>> is causing the issue?
>>> 
>>> Please note aware, the version of httpclient we’ve been using has also
>>> been constant:
>>> 
>>> 
>>> org.apache.httpcomponents
>>> httpclient
>>> 4.5.2
>>> 
>>> 
>>> Ron DiFrango
>>> 
>>> 
>>> On 2/13/17, 11:44 AM, "DiFrango, Ronald" >> com>
>>> wrote:
>>> 
>>> This is running in Tomcat 8.0.33 in a Docker Container via AWS’s ECS
>>> which is identical to before with log4j2 2.5.
>>> 
>>> We’ve run the application with Visual VM and the one thing for sure
>>> that we’ve seen is that in 2.6.2 it created tons of threads,
>>> something like
>>> 50+ but on 2.6 or 2.7 it was only 2.  Now the threads were short
>>> lived, but
>>> they got 

Re: CPU Utilization: log4j 2.6 and greater

2017-02-13 Thread Apache
If you are seeing lots of threads can you do a jstack on the process so we can 
see what they are?

Ralph

> On Feb 13, 2017, at 3:50 PM, DiFrango, Ronald 
>  wrote:
> 
> Our policy generally looks like the following where max log files is 2 and 
> size is 10MB and we have 6 appenders configured for various use cases, though the 
> default is WARN so other than our performance logs, nothing is written at 
> high volume:
> 
>  
> filePattern="logs/$${date:-MM}/app-%d{MM-dd-}-%i.log.gz" 
> immediateFlush="false" append="true">
>
>
>
>
>
>
>
> 
> Ron DiFrango
> 
> On 2/13/17, 3:25 PM, "Matt Sicker"  wrote:
> 
>If lots of data are being logged really quickly, then I could see that
>happening.
> 
>On 13 February 2017 at 14:20, Apache  wrote:
> 
>> The only threads in Log4j that I can think of that are short lived are the
>> threads that compress the files at rollover. That would only consume a lot
>> of CPU if rollover is happening a lot.
>> 
>> Ralph
>> 
>>> On Feb 13, 2017, at 1:12 PM, Christopher Schultz <
>> ch...@christopherschultz.net> wrote:
>>> 
>>> Ronald,
>>> 
>>> On 2/13/17 2:29 PM, DiFrango, Ronald wrote:
 Yeah, I’m aware and agreed, but this is done in the Apache HTTP Client
>> Component’s Code, not ours and existed as-is with log4j2 in ALL the
>> versions we’ve been using.
 
 That said, I think this is a red herring and it’s still true that at
>> least for our application any version of log4j2 beyond 2.5 consumes more
>> CPU that 2.5 this is a REST based APi using Tomcat+Jersey+Spring [again
>> nothing has changed] and processing around 50 transactions per second with
>> a guarantee of a minimum of 1 log record to request.
 
 We just haven’t identified WHERE its causing the issue.
 
 One additional data point, we are using the bridging JAR’s for the
>> following as not all 3rd party libraries follow any standard on what they
>> use:
 
  Classis log4j
  SLF4J
  commons-logging
>>> 
>>> 
>>> You mentioned that lots of threads were being created and that they were
>>> short-lived. What code is creating those threads? You never said...
>>> 
>>> -chris
>>> 
 On 2/13/17, 2:11 PM, "Apache"  wrote:
 
   The log.debug call will exit quickly, but if it is constructing a
>> large message that will happen anyway unless one of the standard techniques
>> (like wrapping in a logger.isDebugEnabled call) is used to avoid it.
>> Constructing debug Strings can be expensive.
 
   Ralph
 
> On Feb 13, 2017, at 10:58 AM, DiFrango, Ronald <
>> ronald.difra...@capitalone.com> wrote:
> 
> No we are not and since it’s a log.debug() it should just get dropped
>> on the floor before any sort of copying occurs, right?
> 
> Ron DiFrango
> 
> On 2/13/17, 12:50 PM, "Matt Sicker"  wrote:
> 
>  Are you logging that info?
> 
>  On 13 February 2017 at 11:46, DiFrango, Ronald <
>  ronald.difra...@capitalone.com> wrote:
> 
>> All,
>> 
>> One thing we just noticed is that we are using Apache’s HTTP
>> connection
>> pooling for our downstream calls and in our latest performance run is
>> that
>> the READ operation on org.apache.http.impl.conn.LoggingInputStream
>> seems
>> to be taking a bulk of the time.
>> 
>> If I look at the code, once the HTTP client reads the byte stream it
>> issues a log.debug which could be a large payload, I wonder if that’s
>> what
>> is causing the issue?
>> 
>> Please note aware, the version of httpclient we’ve been using has also
>> been constant:
>> 
>> 
>>  org.apache.httpcomponents
>>  httpclient
>>  4.5.2
>> 
>> 
>> Ron DiFrango
>> 
>> 
>> On 2/13/17, 11:44 AM, "DiFrango, Ronald" > com>
>> wrote:
>> 
>>  This is running in Tomcat 8.0.33 in a Docker Container via AWS’s ECS
>> which is identical to before with log4j2 2.5.
>> 
>>  We’ve run the application with Visual VM and the one thing for sure
>> that we’ve seen is that in 2.6.2 it created tons of threads,
>> something like
>> 50+ but on 2.6 or 2.7 it was only 2.  Now the threads were short
>> lived, but
>> they got created.
>> 
>>  We’re running another performance test today with Async loggers to
>> see
>> if that helps or exhibits the same thing though previous testing with
>> Async
>> had some of the same char
>> 
>>  Here’s our layout pattern:
>> 
>> [%t] %d{DATE} %-5p %-15c{1} [%X]: %cm%n
>> 
>>  Please not the %cm is a custom message handler that we use to use to
>> handle security filtering of the message payload aka we extend from
>> 

Re: CPU Utilization: log4j 2.6 and greater

2017-02-13 Thread Apache
The only threads in Log4j that I can think of that are short lived are the 
threads that compress the files at rollover. That would only consume a lot of 
CPU if rollover is happening a lot.

Ralph

> On Feb 13, 2017, at 1:12 PM, Christopher Schultz 
>  wrote:
> 
> Ronald,
> 
> On 2/13/17 2:29 PM, DiFrango, Ronald wrote:
>> Yeah, I’m aware and agreed, but this is done in the Apache HTTP Client 
>> Component’s Code, not ours and existed as-is with log4j2 in ALL the versions 
>> we’ve been using.
>> 
>> That said, I think this is a red herring and it’s still true that at least 
>> for our application any version of log4j2 beyond 2.5 consumes more CPU that 
>> 2.5 this is a REST based APi using Tomcat+Jersey+Spring [again nothing has 
>> changed] and processing around 50 transactions per second with a guarantee 
>> of a minimum of 1 log record to request.
>> 
>> We just haven’t identified WHERE its causing the issue.
>> 
>> One additional data point, we are using the bridging JAR’s for the following 
>> as not all 3rd party libraries follow any standard on what they use:
>> 
>>   Classis log4j
>>   SLF4J
>>   commons-logging
> 
> 
> You mentioned that lots of threads were being created and that they were
> short-lived. What code is creating those threads? You never said...
> 
> -chris
> 
>> On 2/13/17, 2:11 PM, "Apache"  wrote:
>> 
>>The log.debug call will exit quickly, but if it is constructing a large 
>> message that will happen anyway unless one of the standard techniques (like 
>> wrapping in a logger.isDebugEnabled call) is used to avoid it. Constructing 
>> debug Strings can be expensive.
>> 
>>Ralph
>> 
>>> On Feb 13, 2017, at 10:58 AM, DiFrango, Ronald 
>>>  wrote:
>>> 
>>> No we are not and since it’s a log.debug() it should just get dropped on 
>>> the floor before any sort of copying occurs, right?
>>> 
>>> Ron DiFrango
>>> 
>>> On 2/13/17, 12:50 PM, "Matt Sicker"  wrote:
>>> 
>>>   Are you logging that info?
>>> 
>>>   On 13 February 2017 at 11:46, DiFrango, Ronald <
>>>   ronald.difra...@capitalone.com> wrote:
>>> 
 All,
 
 One thing we just noticed is that we are using Apache’s HTTP connection
 pooling for our downstream calls and in our latest performance run is that
 the READ operation on org.apache.http.impl.conn.LoggingInputStream seems
 to be taking a bulk of the time.
 
 If I look at the code, once the HTTP client reads the byte stream it
 issues a log.debug which could be a large payload, I wonder if that’s what
 is causing the issue?
 
 Please note aware, the version of httpclient we’ve been using has also
 been constant:
 
 
   org.apache.httpcomponents
   httpclient
   4.5.2
 
 
 Ron DiFrango
 
 
 On 2/13/17, 11:44 AM, "DiFrango, Ronald" 
 wrote:
 
   This is running in Tomcat 8.0.33 in a Docker Container via AWS’s ECS
 which is identical to before with log4j2 2.5.
 
   We’ve run the application with Visual VM and the one thing for sure
 that we’ve seen is that in 2.6.2 it created tons of threads, something like
 50+ but on 2.6 or 2.7 it was only 2.  Now the threads were short lived, but
 they got created.
 
   We’re running another performance test today with Async loggers to see
 if that helps or exhibits the same thing though previous testing with Async
 had some of the same char
 
   Here’s our layout pattern:
 
  [%t] %d{DATE} %-5p %-15c{1} [%X]: %cm%n
 
   Please not the %cm is a custom message handler that we use to use to
 handle security filtering of the message payload aka we extend from
 LogEventPatternConverter.
 
   Thanks,
 
   Ron DiFrango
 
   On 2/13/17, 11:22 AM, "Matt Sicker"  wrote:
 
   What server environment are you running this in?
 
   On 13 February 2017 at 09:19, Remko Popma 
 wrote:
 
> Ron,
> 
> We haven't heard of any issues like you describe.
> Have you tried running your application with Java Flight Recorder
>  runtime-guide/run.htm#JFRUH176>?
> This should help diagnose what is going on.
> 
> Remko
> 
> 
> 
> On Mon, Feb 13, 2017 at 9:59 PM, DiFrango, Ronald <
> ronald.difra...@capitalone.com> wrote:
> 
>> All,
>> 
>> We recently upgrade to 2.6 and noticed a dramatic increase in
 CPU and
>> Thread utilization that seems to be tied to the new “garbage
 free” mode
> of
>> log4j  2.6.  Here’s some of the baseline numbers:
>> 
>> 
>> · Log4j 2.5: CPU typically ran around 25%
>> 
>> · Log4j 2.6: CPU typically ran around 75%

RE: CPU Utilization: log4j 2.6 and great

2017-02-13 Thread DiFrango, Ronald
They were all tied to log4j2 itself, though this seemed to primarily occur with 
2.6.2 and went away in 2.7

Ron DiFrango

From: Christopher Schultz <ch...@christopherschultz.net>
Sent: Monday, February 13, 2017 3:12:34 PM
To: Log4J Users List
Subject: Re: CPU Utilization: log4j 2.6 and greater

Ronald,

On 2/13/17 2:29 PM, DiFrango, Ronald wrote:
> Yeah, I’m aware and agreed, but this is done in the Apache HTTP Client 
> Component’s Code, not ours and existed as-is with log4j2 in ALL the versions 
> we’ve been using.
>
> That said, I think this is a red herring and it’s still true that at least 
> for our application any version of log4j2 beyond 2.5 consumes more CPU than 
> 2.5. This is a REST-based API using Tomcat+Jersey+Spring [again, nothing has 
> changed], processing around 50 transactions per second with a guarantee of 
> a minimum of one log record per request.
>
> We just haven’t identified WHERE it’s causing the issue.
>
> One additional data point, we are using the bridging JARs for the following 
> as not all 3rd party libraries follow any standard on what they use:
>
>    Classic log4j
>    SLF4J
>    commons-logging


You mentioned that lots of threads were being created and that they were
short-lived. What code is creating those threads? You never said...

-chris

> On 2/13/17, 2:11 PM, "Apache" <ralph.go...@dslextreme.com> wrote:
>
> The log.debug call will exit quickly, but if it is constructing a large 
> message that will happen anyway unless one of the standard techniques (like 
> wrapping in a logger.isDebugEnabled call) is used to avoid it. Constructing 
> debug Strings can be expensive.
>
> Ralph
>
> > On Feb 13, 2017, at 10:58 AM, DiFrango, Ronald 
> <ronald.difra...@capitalone.com> wrote:
> >
> > No we are not and since it’s a log.debug() it should just get dropped 
> on the floor before any sort of copying occurs, right?
> >
> > Ron DiFrango
> >
> > On 2/13/17, 12:50 PM, "Matt Sicker" <boa...@gmail.com> wrote:
> >
> >Are you logging that info?
> >
> >On 13 February 2017 at 11:46, DiFrango, Ronald <
> >ronald.difra...@capitalone.com> wrote:
> >
> >> All,
> >>
> >> One thing we just noticed is that we are using Apache’s HTTP connection
> >> pooling for our downstream calls and in our latest performance run is 
> that
> >> the READ operation on org.apache.http.impl.conn.LoggingInputStream 
> seems
> >> to be taking a bulk of the time.
> >>
> >> If I look at the code, once the HTTP client reads the byte stream it
> >> issues a log.debug which could be a large payload, I wonder if that’s 
> what
> >> is causing the issue?
> >>
> >> Please note aware, the version of httpclient we’ve been using has also
> >> been constant:
> >>
> >> 
> >>org.apache.httpcomponents
> >>httpclient
> >>4.5.2
> >> 
> >>
> >> Ron DiFrango
> >>
> >>
> >> On 2/13/17, 11:44 AM, "DiFrango, Ronald" 
> <ronald.difra...@capitalone.com>
> >> wrote:
> >>
> >>This is running in Tomcat 8.0.33 in a Docker Container via AWS’s ECS
> >> which is identical to before with log4j2 2.5.
> >>
> >>We’ve run the application with Visual VM and the one thing for sure
> >> that we’ve seen is that in 2.6.2 it created tons of threads, something 
> like
> >> 50+ but on 2.6 or 2.7 it was only 2.  Now the threads were short 
> lived, but
> >> they got created.
> >>
> >>We’re running another performance test today with Async loggers to 
> see
> >> if that helps or exhibits the same thing though previous testing with 
> Async
> >> had some of the same char
> >>
> >>Here’s our layout pattern:
> >>
> >>   [%t] %d{DATE} %-5p %-15c{1} [%X]: %cm%n
> >>
> >>Please not the %cm is a custom message handler that we use to use to
> >> handle security filtering of the message payload aka we extend from
> >> LogEventPatternConverter.
> >>
> >>Thanks,
> >>
> >>Ron DiFrango
> >>
> >>On 2/13/17, 11:22 AM, "Matt Sicker" <boa...@gmail.com> wrote:
> >>

Re: CPU Utilization: log4j 2.6 and greater

2017-02-13 Thread Christopher Schultz
Ronald,

On 2/13/17 2:29 PM, DiFrango, Ronald wrote:
> Yeah, I’m aware and agreed, but this is done in the Apache HTTP Client 
> Component’s Code, not ours and existed as-is with log4j2 in ALL the versions 
> we’ve been using.
> 
> That said, I think this is a red herring and it’s still true that at least 
> for our application any version of log4j2 beyond 2.5 consumes more CPU than 
> 2.5. This is a REST-based API using Tomcat+Jersey+Spring [again, nothing has 
> changed], processing around 50 transactions per second with a guarantee of 
> a minimum of one log record per request.
> 
> We just haven’t identified WHERE it’s causing the issue.
> 
> One additional data point, we are using the bridging JARs for the following 
> as not all 3rd party libraries follow any standard on what they use:
> 
>    Classic log4j
>    SLF4J
>    commons-logging


You mentioned that lots of threads were being created and that they were
short-lived. What code is creating those threads? You never said...

-chris

> On 2/13/17, 2:11 PM, "Apache"  wrote:
> 
> The log.debug call will exit quickly, but if it is constructing a large 
> message that will happen anyway unless one of the standard techniques (like 
> wrapping in a logger.isDebugEnabled call) is used to avoid it. Constructing 
> debug Strings can be expensive.
> 
> Ralph
> 
> > On Feb 13, 2017, at 10:58 AM, DiFrango, Ronald 
>  wrote:
> > 
> > No we are not and since it’s a log.debug() it should just get dropped 
> on the floor before any sort of copying occurs, right?
> > 
> > Ron DiFrango
> > 
> > On 2/13/17, 12:50 PM, "Matt Sicker"  wrote:
> > 
> >Are you logging that info?
> > 
> >On 13 February 2017 at 11:46, DiFrango, Ronald <
> >ronald.difra...@capitalone.com> wrote:
> > 
> >> All,
> >> 
> >> One thing we just noticed is that we are using Apache’s HTTP connection
> >> pooling for our downstream calls and in our latest performance run is 
> that
> >> the READ operation on org.apache.http.impl.conn.LoggingInputStream 
> seems
> >> to be taking a bulk of the time.
> >> 
> >> If I look at the code, once the HTTP client reads the byte stream it
> >> issues a log.debug which could be a large payload, I wonder if that’s 
> what
> >> is causing the issue?
> >> 
> >> Please note aware, the version of httpclient we’ve been using has also
> >> been constant:
> >> 
> >> 
> >>org.apache.httpcomponents
> >>httpclient
> >>4.5.2
> >> 
> >> 
> >> Ron DiFrango
> >> 
> >> 
> >> On 2/13/17, 11:44 AM, "DiFrango, Ronald" 
> 
> >> wrote:
> >> 
> >>This is running in Tomcat 8.0.33 in a Docker Container via AWS’s ECS
> >> which is identical to before with log4j2 2.5.
> >> 
> >>We’ve run the application with Visual VM and the one thing for sure
> >> that we’ve seen is that in 2.6.2 it created tons of threads, something 
> like
> >> 50+ but on 2.6 or 2.7 it was only 2.  Now the threads were short 
> lived, but
> >> they got created.
> >> 
> >>We’re running another performance test today with Async loggers to 
> see
> >> if that helps or exhibits the same thing though previous testing with 
> Async
> >> had some of the same char
> >> 
> >>Here’s our layout pattern:
> >> 
> >>   [%t] %d{DATE} %-5p %-15c{1} [%X]: %cm%n
> >> 
> >>Please not the %cm is a custom message handler that we use to use to
> >> handle security filtering of the message payload aka we extend from
> >> LogEventPatternConverter.
> >> 
> >>Thanks,
> >> 
> >>Ron DiFrango
> >> 
> >>On 2/13/17, 11:22 AM, "Matt Sicker"  wrote:
> >> 
> >>What server environment are you running this in?
> >> 
> >>On 13 February 2017 at 09:19, Remko Popma 
> 
> >> wrote:
> >> 
> >>> Ron,
> >>> 
> >>> We haven't heard of any issues like you describe.
> >>> Have you tried running your application with Java Flight Recorder
> >>>  >>> runtime-guide/run.htm#JFRUH176>?
> >>> This should help diagnose what is going on.
> >>> 
> >>> Remko
> >>> 
> >>> 
> >>> 
> >>> On Mon, Feb 13, 2017 at 9:59 PM, DiFrango, Ronald <
> >>> ronald.difra...@capitalone.com> wrote:
> >>> 
>  All,
>  
>  We recently upgrade to 2.6 and noticed a dramatic increase in
> >> CPU and
>  Thread utilization that seems to be tied to the new “garbage
> >> free” mode
> >>> of
>  log4j  2.6.  Here’s some of the baseline numbers:
>  
>  
>  · Log4j 2.5: 

Re: CPU Utilization: log4j 2.6 and greater

2017-02-13 Thread DiFrango, Ronald
Ralph,

Yeah, I’m aware and agreed, but this is done in the Apache HTTP Client 
Component’s Code, not ours and existed as-is with log4j2 in ALL the versions 
we’ve been using.

That said, I think this is a red herring, and it’s still true that at least for 
our application any version of log4j2 beyond 2.5 consumes more CPU than 2.5. 
This is a REST-based API using Tomcat+Jersey+Spring [again, nothing has changed], 
processing around 50 transactions per second with a guarantee of a minimum of 
one log record per request.

We just haven’t identified WHERE it’s causing the issue.

One additional data point, we are using the bridging JARs for the following, as 
not all 3rd party libraries follow any standard on what they use (the usual 
bridge artifacts are sketched below):

   Classic log4j
   SLF4J
   commons-logging
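
For reference, a sketch of the usual Log4j 2 bridge artifacts for those three 
APIs (the version shown is illustrative and should match the log4j-core version 
in use):

    <dependency>
        <groupId>org.apache.logging.log4j</groupId>
        <artifactId>log4j-1.2-api</artifactId>   <!-- classic log4j 1.x API -->
        <version>2.8</version>
    </dependency>
    <dependency>
        <groupId>org.apache.logging.log4j</groupId>
        <artifactId>log4j-slf4j-impl</artifactId>   <!-- SLF4J binding -->
        <version>2.8</version>
    </dependency>
    <dependency>
        <groupId>org.apache.logging.log4j</groupId>
        <artifactId>log4j-jcl</artifactId>   <!-- commons-logging bridge -->
        <version>2.8</version>
    </dependency>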
Thanks,

Ron DiFrango

On 2/13/17, 2:11 PM, "Apache"  wrote:

The log.debug call will exit quickly, but if it is constructing a large 
message that will happen anyway unless one of the standard techniques (like 
wrapping in a logger.isDebugEnabled call) is used to avoid it. Constructing 
debug Strings can be expensive.

Ralph

> On Feb 13, 2017, at 10:58 AM, DiFrango, Ronald 
 wrote:
> 
> No we are not and since it’s a log.debug() it should just get dropped on 
the floor before any sort of copying occurs, right?
> 
> Ron DiFrango
> 
> On 2/13/17, 12:50 PM, "Matt Sicker"  wrote:
> 
>Are you logging that info?
> 
>On 13 February 2017 at 11:46, DiFrango, Ronald <
>ronald.difra...@capitalone.com> wrote:
> 
>> All,
>> 
>> One thing we just noticed is that we are using Apache’s HTTP connection
>> pooling for our downstream calls and in our latest performance run is 
that
>> the READ operation on org.apache.http.impl.conn.LoggingInputStream seems
>> to be taking a bulk of the time.
>> 
>> If I look at the code, once the HTTP client reads the byte stream it
>> issues a log.debug which could be a large payload, I wonder if that’s 
what
>> is causing the issue?
>> 
>> Please note, the version of httpclient we’ve been using has also
>> been constant:
>> 
>> 
>>org.apache.httpcomponents
>>httpclient
>>4.5.2
>> 
>> 
>> Ron DiFrango
>> 
>> 
>> On 2/13/17, 11:44 AM, "DiFrango, Ronald" 
>> wrote:
>> 
>>This is running in Tomcat 8.0.33 in a Docker Container via AWS’s ECS
>> which is identical to before with log4j2 2.5.
>> 
>>We’ve run the application with Visual VM and the one thing for sure
>> that we’ve seen is that in 2.6.2 it created tons of threads, something 
like
>> 50+ but on 2.6 or 2.7 it was only 2.  Now the threads were short lived, 
but
>> they got created.
>> 
>>We’re running another performance test today with Async loggers to see
>> if that helps or exhibits the same thing though previous testing with 
Async
>> had some of the same char
>> 
>>Here’s our layout pattern:
>> 
>>   [%t] %d{DATE} %-5p %-15c{1} [%X]: %cm%n
>> 
>>Please note the %cm is a custom message handler that we use to
>> handle security filtering of the message payload, aka we extend from
>> LogEventPatternConverter.
>> 
>>Thanks,
>> 
>>Ron DiFrango
>> 
>>On 2/13/17, 11:22 AM, "Matt Sicker"  wrote:
>> 
>>What server environment are you running this in?
>> 
>>On 13 February 2017 at 09:19, Remko Popma 
>> wrote:
>> 
>>> Ron,
>>> 
>>> We haven't heard of any issues like you describe.
>>> Have you tried running your application with Java Flight Recorder
>>> >> runtime-guide/run.htm#JFRUH176>?
>>> This should help diagnose what is going on.
>>> 
>>> Remko
>>> 
>>> 
>>> 
>>> On Mon, Feb 13, 2017 at 9:59 PM, DiFrango, Ronald <
>>> ronald.difra...@capitalone.com> wrote:
>>> 
 All,
 
 We recently upgrade to 2.6 and noticed a dramatic increase in
>> CPU and
 Thread utilization that seems to be tied to the new “garbage
>> free” mode
>>> of
 log4j  2.6.  Here’s some of the baseline numbers:
 
 
 · Log4j 2.5: CPU typically ran around 25%
 
 · Log4j 2.6: CPU typically ran around 75%
 
 · Log4j 2.6.2+: CPU typically ran around 100%
 
 · We’ve also tried turning off garbage free mode and
>> that made
 things worse as the CPU was around 120% and caused us to not
>> meet our
>>> SLA’s
 
 It important to note that this is a REST Api using Jersey and

Re: CPU Utilization: log4j 2.6 and greater

2017-02-13 Thread Apache
The log.debug call will exit quickly, but if it is constructing a large message 
that will happen anyway unless one of the standard techniques (like wrapping in 
a logger.isDebugEnabled call) is used to avoid it. Constructing debug Strings 
can be expensive.

Ralph

> On Feb 13, 2017, at 10:58 AM, DiFrango, Ronald 
>  wrote:
> 
> No we are not and since it’s a log.debug() it should just get dropped on the 
> floor before any sort of copying occurs, right?
> 
> Ron DiFrango
> 
> On 2/13/17, 12:50 PM, "Matt Sicker"  wrote:
> 
>Are you logging that info?
> 
>On 13 February 2017 at 11:46, DiFrango, Ronald <
>ronald.difra...@capitalone.com> wrote:
> 
>> All,
>> 
>> One thing we just noticed is that we are using Apache’s HTTP connection
>> pooling for our downstream calls and in our latest performance run is that
>> the READ operation on org.apache.http.impl.conn.LoggingInputStream seems
>> to be taking the bulk of the time.
>> 
>> If I look at the code, once the HTTP client reads the byte stream it
>> issues a log.debug which could be a large payload, I wonder if that’s what
>> is causing the issue?
>> 
>> Please note aware, the version of httpclient we’ve been using has also
>> been constant:
>> 
>> 
>>org.apache.httpcomponents
>>httpclient
>>4.5.2
>> 
>> 
>> Ron DiFrango
>> 
>> 
>> On 2/13/17, 11:44 AM, "DiFrango, Ronald" 
>> wrote:
>> 
>>This is running in Tomcat 8.0.33 in a Docker Container via AWS’s ECS
>> which is identical to before with log4j2 2.5.
>> 
>>We’ve run the application with Visual VM and the one thing for sure
>> that we’ve seen is that in 2.6.2 it created tons of threads, something like
>> 50+ but on 2.6 or 2.7 it was only 2.  Now the threads were short lived, but
>> they got created.
>> 
>>We’re running another performance test today with Async loggers to see
>> if that helps or exhibits the same thing though previous testing with Async
>> had some of the same char
>> 
>>Here’s our layout pattern:
>> 
>>   [%t] %d{DATE} %-5p %-15c{1} [%X]: %cm%n
>> 
>>Please not the %cm is a custom message handler that we use to use to
>> handle security filtering of the message payload aka we extend from
>> LogEventPatternConverter.
>> 
>>Thanks,
>> 
>>Ron DiFrango
>> 
>>On 2/13/17, 11:22 AM, "Matt Sicker"  wrote:
>> 
>>What server environment are you running this in?
>> 
>>On 13 February 2017 at 09:19, Remko Popma 
>> wrote:
>> 
>>> Ron,
>>> 
>>> We haven't heard of any issues like you describe.
>>> Have you tried running your application with Java Flight Recorder
>>> >> runtime-guide/run.htm#JFRUH176>?
>>> This should help diagnose what is going on.
>>> 
>>> Remko
>>> 
>>> 
>>> 
>>> On Mon, Feb 13, 2017 at 9:59 PM, DiFrango, Ronald <
>>> ronald.difra...@capitalone.com> wrote:
>>> 
 All,
 
 We recently upgrade to 2.6 and noticed a dramatic increase in
>> CPU and
 Thread utilization that seems to be tied to the new “garbage
>> free” mode
>>> of
 log4j  2.6.  Here’s some of the baseline numbers:
 
 
 · Log4j 2.5: CPU typically ran around 25%
 
 · Log4j 2.6: CPU typically ran around 75%
 
 · Log4j 2.6.2+: CPU typically ran around 100%
 
 · We’ve also tried turning off garbage free mode and
>> that made
 things worse as the CPU was around 120% and caused us to not
>> meet our
>>> SLA’s
 
 It important to note that this is a REST Api using Jersey and
>> typically
 responds in about in under 50ms on a per request so its high
>> volume, but
 the logging level is WARN or higher except for our single
>> performance log
 record which is written once per request using the lambda base
>> approach.
 
 Our next test is going to be switching to all ASYNC loggers
>> and see what
 effect that has, but I guess the general question is, has
>> anyone else
>>> seen
 this?  Any thoughts?
 
 Ron DiFrango
 
 
 The information contained in this e-mail is confidential and/or
 proprietary to Capital One and/or its affiliates and may only
>> be used
 solely in performance of work or services for Capital One. The
>>> information
 transmitted herewith is intended only for use by the
>> individual or entity
 to which it is addressed. If the reader of this message is not
>> the
>>> intended
 recipient, you are hereby notified that any review,
>> retransmission,
 dissemination, distribution, copying or other use of, or
>> taking of any
 action in reliance upon this information is strictly
>> prohibited. If you
 have received this communication in error, please contact the
>> sender and
 delete the material from your computer.
 
>>> 
>> 

Re: CPU Utilization: log4j 2.6 and great

2017-02-13 Thread DiFrango, Ronald
No we are not and since it’s a log.debug() it should just get dropped on the 
floor before any sort of copying occurs, right?

Ron DiFrango

On 2/13/17, 12:50 PM, "Matt Sicker"  wrote:

Are you logging that info?


Re: CPU Utilization: log4j 2.6 and great

2017-02-13 Thread Matt Sicker
Are you logging that info?


Re: CPU Utilization: log4j 2.6 and great

2017-02-13 Thread DiFrango, Ronald
All,

One thing we just noticed is that we are using Apache’s HTTP connection pooling 
for our downstream calls, and in our latest performance run the READ operation on 
org.apache.http.impl.conn.LoggingInputStream seems to be taking the bulk of the 
time.

If I look at the code, once the HTTP client reads the byte stream it issues a 
log.debug of what could be a large payload; I wonder if that’s what is causing 
the issue?
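
One quick check (a sketch only; the logger names below are the standard httpclient 
wire/context logging categories, not anything from our code) would be to print the 
effective level of the wire logger and force it up if it is somehow inheriting DEBUG:

    import org.apache.logging.log4j.Level;
    import org.apache.logging.log4j.LogManager;
    import org.apache.logging.log4j.core.config.Configurator;

    public class HttpClientLogCheck {
        public static void main(String[] args) {
            // Print the effective level of httpclient's wire-logging category.
            System.out.println("org.apache.http.wire -> "
                    + LogManager.getLogger("org.apache.http.wire").getLevel());

            // If it reports DEBUG, raise it so the payload is never turned into a log String.
            Configurator.setLevel("org.apache.http.wire", Level.WARN);
            Configurator.setLevel("org.apache.http", Level.WARN);
        }
    }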

Please be aware that the version of httpclient we’ve been using has also been 
constant:


<dependency>
    <groupId>org.apache.httpcomponents</groupId>
    <artifactId>httpclient</artifactId>
    <version>4.5.2</version>
</dependency>


Ron DiFrango



Re: CPU Utilization: log4j 2.6 and great

2017-02-13 Thread DiFrango, Ronald
This is running in Tomcat 8.0.33 in a Docker container via AWS’s ECS, which is 
identical to our setup before with log4j2 2.5.

We’ve run the application with VisualVM, and the one thing we’ve seen for sure is 
that 2.6.2 created tons of threads, something like 50+, while on 2.6 or 2.7 it was 
only 2.  The threads were short-lived, but they got created.

We’re running another performance test today with async loggers to see if that 
helps or exhibits the same thing, though previous testing with async showed some 
of the same characteristics.
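
(For reference, by “all async loggers” we mean flipping the context selector 
before any logger is created, which also needs the LMAX Disruptor jar on the 
classpath; roughly:)

    import org.apache.logging.log4j.LogManager;
    import org.apache.logging.log4j.Logger;

    public class AsyncLoggingBootstrap {
        public static void main(String[] args) {
            // Must be set before the first Logger is obtained; in Tomcat you would pass
            // -DLog4jContextSelector=... via CATALINA_OPTS instead.
            System.setProperty("Log4jContextSelector",
                    "org.apache.logging.log4j.core.async.AsyncLoggerContextSelector");

            // Any logger created from this point on is asynchronous.
            Logger logger = LogManager.getLogger(AsyncLoggingBootstrap.class);
            logger.info("logger implementation: {}", logger.getClass().getName());
        }
    }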

Here’s our layout pattern:

   [%t] %d{DATE} %-5p %-15c{1} [%X]: %cm%n

Please note that %cm is a custom message converter that we use to handle security 
filtering of the message payload, i.e. we extend LogEventPatternConverter.
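
(For anyone curious, the converter is roughly shaped like the sketch below; the 
class name and the masking rule are placeholders, not our real filter:)

    import org.apache.logging.log4j.core.LogEvent;
    import org.apache.logging.log4j.core.config.plugins.Plugin;
    import org.apache.logging.log4j.core.pattern.ConverterKeys;
    import org.apache.logging.log4j.core.pattern.LogEventPatternConverter;

    // Registers the %cm pattern; the masking below is only a placeholder rule.
    @Plugin(name = "CustomMessageConverter", category = "Converter")
    @ConverterKeys({"cm"})
    public final class CustomMessageConverter extends LogEventPatternConverter {

        private CustomMessageConverter() {
            super("CustomMessage", "cm");
        }

        // Factory method required by the PatternLayout plugin system.
        public static CustomMessageConverter newInstance(final String[] options) {
            return new CustomMessageConverter();
        }

        @Override
        public void format(final LogEvent event, final StringBuilder toAppendTo) {
            // Placeholder security filter: mask anything that looks like a card number.
            String message = event.getMessage().getFormattedMessage();
            toAppendTo.append(message.replaceAll("\\b\\d{13,16}\\b", "****"));
        }
    }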

Thanks,

Ron DiFrango

On 2/13/17, 11:22 AM, "Matt Sicker"  wrote:

What server environment are you running this in?




Re: CPU Utilization: log4j 2.6 and great

2017-02-13 Thread Matt Sicker
What server environment are you running this in?

-- 
Matt Sicker 


Re: CPU Utilization: log4j 2.6 and great

2017-02-13 Thread Remko Popma
Ron,

We haven't heard of any issues like you describe.
Have you tried running your application with Java Flight Recorder? This should 
help diagnose what is going on.
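
One way to capture a recording on an Oracle JDK 8 Tomcat (flags and jcmd options 
from memory, so adjust for your JDK):

    # JDK 8 era: JFR sits behind the commercial-features unlock
    CATALINA_OPTS="$CATALINA_OPTS -XX:+UnlockCommercialFeatures -XX:+FlightRecorder"

    # then attach to the running JVM and record a couple of minutes under load
    jcmd <tomcat-pid> JFR.start name=cpu-check duration=120s filename=/tmp/cpu-check.jfr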

Remko



On Mon, Feb 13, 2017 at 9:59 PM, DiFrango, Ronald <
ronald.difra...@capitalone.com> wrote:

> All,
>
> We recently upgraded to 2.6 and noticed a dramatic increase in CPU and
> thread utilization that seems to be tied to the new “garbage free” mode of
> log4j 2.6. Here are some of the baseline numbers:
>
>
> · Log4j 2.5: CPU typically ran around 25%
>
> · Log4j 2.6: CPU typically ran around 75%
>
> · Log4j 2.6.2+: CPU typically ran around 100%
>
> · We’ve also tried turning off garbage-free mode, and that made
> things worse: the CPU was around 120% and caused us to miss our SLAs
>
> It is important to note that this is a REST API using Jersey that typically
> responds in under 50ms per request, so it’s high volume, but
> the logging level is WARN or higher except for our single performance log
> record, which is written once per request using the lambda-based approach.
>
> Our next test is going to be switching to all ASYNC loggers to see what
> effect that has, but I guess the general question is: has anyone else seen
> this? Any thoughts?
>
> Ron DiFrango