Re: rolled log file continues to grow unbound

2016-03-31 Thread Ralph Goers
Do you have multiple web apps in the same Tomcat using the same log4j 
configuration?  You should never (even with Log4j 2) have two applications each 
use a rolling file appender and try to write to the same files. What you are 
experiencing is exactly what happens when they do.
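
A quick way to check that (just a sketch; the path is taken from your mail and
<pid> is a placeholder for the Tomcat process id):

# every process that currently has the active log open
lsof /var/log/tomcat/catalina.out
# every descriptor in one process that points at it
ls -l /proc/<pid>/fd | grep catalina.out

If two applications (or two copies of log4j loaded by different class loaders)
each have an appender on that path, you will see more than one writable
descriptor, and the writer that did not do the rename keeps appending to the
rotated file after a roll.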

Ralph

> On Mar 31, 2016, at 4:51 PM, Simon Chan  wrote:
> 
> Hi Ron,
> 
> You are right! fd 1 and 2 are blocking log4j from closing the log file
> properly.
> 
> tomcat starts the Java process with: java -Dblah... >> catalina.out 2>&1  &
> 
> I wonder how we did not notice this in all these years.
> 
> Thanks again
> Simon
> 






Re: rolled log file continues to grow unbound

2016-03-31 Thread Simon Chan
Hi Ron,

You are right! fd 1 and 2 are blocking log4j from closing the log file
properly.

tomcat starts the Java process with: java -Dblah... >> catalina.out 2>&1  &
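
So when log4j renames catalina.out to catalina.out.1, fd 1 and fd 2 still
point at the renamed file, and everything written to stdout/stderr keeps
landing in the rotated file (and keeps it growing even after it is unlinked).
What we will probably try instead is roughly this (untested sketch;
console.out is just a placeholder name for a separate console log, so that
only log4j holds catalina.out):

java -Dblah... >> /var/log/tomcat/console.out 2>&1  &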

I wonder how we did not notice this in all these years.

Thanks again
Simon

On Thu, Mar 31, 2016 at 2:38 PM, Ron Gonzalez  wrote:

> My first inclination is that you shouldn't have two locks on the same file
> from the same process.  That points to multiple writers independently
> locking the file.  This means when one tries to roll the other will still
> retain the lock on the file blocking the other one.
>
> You should identify whether your application code has a handle leak or
> whether different components are instantiating multiple versions of log4j
> which both want to retain locks to the same file.
>
> A thread dump from the jvm when the issue is happening should help isolate
> it.
>
> HTH
>
> Ron


Re: rolled log file continues to grow unbound

2016-03-31 Thread Remko Popma
Simon,

To avoid confusion later on, the version of log4j you are using is 1.2.15. The 
leading "1" is important because it means you're using the previous generation 
of log4j, which is EOL (end of life) now. If you're doing regression testing to 
upgrade to Java 8 anyway, may I suggest you upgrade to the latest version of 
log4j (currently 2.5) as well? 

The previous generation of log4j has known issues. I don't know if the issue 
you mention is one of them, but I would be surprised if this happened in log4j2 
as well. (And if it did we'd be able to fix it.)

Best regards,
Remko

Sent from my iPhone

> On 2016/04/01, at 6:38, Ron Gonzalez  wrote:
> 
> My first inclination is that you shouldn't have two locks on the same file
> from the same process.  That points to multiple writers independently
> locking the file.  This means when one tries to roll the other will still
> retain the lock on the file blocking the other one.
> 
> You should identify whether your application code has a handle leak or
> whether different components are instantiating multiple versions of log4j
> which both want to retain locks to the same file.
> 
> A thread dump from the jvm when the issue is happening should help isolate
> it.
> 
> HTH
> 
> Ron




Re: rolled log file continues to grow unbound

2016-03-31 Thread Ron Gonzalez
My first inclination is that you shouldn't have two locks on the same file
from the same process.  That points to multiple writers independently
locking the file.  This means when one tries to roll the other will still
retain the lock on the file blocking the other one.

You should identify whether your application code has a handle leak or
whether different components are instantiating multiple versions of log4j
which both want to retain locks to the same file.

A thread dump from the jvm when the issue is happening should help isolate
it.
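
For example (sketch only; <pid> stands for the Tomcat process id):

# thread dump via the JDK tool
jstack <pid> > /tmp/tomcat-threads.txt
# or send SIGQUIT so the JVM prints the dump to its stdout
kill -3 <pid>
# and list which descriptors still point at the log files
ls -l /proc/<pid>/fd | grep catalina

The dump shows what each thread is doing; the fd listing shows how many
descriptors are still attached to the current and rotated files.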

HTH

Ron


rolled log file continues to grow unbound

2016-03-31 Thread Simon Chan
Greetings,

Hope someone has heard of the problem below or can suggest workarounds or
how to debug this thing.

Problem:
===
- After rolling, the old (rotated) log file continues to pick up entries
  and grows. The new log file picks up WARN and INFO level logs; the
  old/rolled file picks up ERROR level logs.
- After max rotations, the filename is no longer visible to "ls", but the
  Java process still has 2 file descriptors pointing at the deleted file.
  The file will keep growing until it eats up all disk space.
- Only a handful of systems have this problem. We have about 20 systems
  in the lab and most of them rotate correctly. One system hits this bug
  100% of the time, meaning that after restarting the Java process, the
  rotation problem appears at the first rollover. It is our stress test
  system and generates quite a bit of log entries (busy but not
  ridiculously busy).
- 3 other systems entered the above state after moderate effort to fill
  the log file and trigger rolling.

Other Symptoms:
=
Output from the command "ls -l /proc/<pid>/fd | grep catalina.out":
Mar 31 11:21 1 -> /var/log/tomcat/catalina.out.1
Mar 31 11:21 111 -> /var/log/tomcat/catalina.out
Mar 31 11:21 2 -> /var/log/tomcat/catalina.out.1

We have another Java process using an identical log4j properties file
(except for the log filename). That process has no rotation problem.
"lsof | grep log4j" output shows the other process has handles to
log4j-1.2.15.jar, but the tomcat process that has the rotation problem
does not?! Not sure if this means anything.


History and Environment:
===
We have been using log4j since 2012 without any issues. There are about 200
customers using this configuration:

- CentOS 6.4, 6.5 and 6.6 (systems are upgraded to the latest CentOS releases)
- Java 6 (openjdk)
- tomcat 6.0.32
- log4j 2.15 (based on the filename log4j-1.2.15.jar)
- Dell server with 16G RAM and 12 CPU cores (2 sockets X 6 cores)

This problem appeared recently when we upgraded our systems to the following:
- CentOS 6.7
- Java 8 (currently java-1.8.0-openjdk-1.8.0.77-0.b03)
- tomcat 6.0.45 (at one point the stress test system used 6.0.32 and
  already exhibited the rotation problem)
- log4j 2.15

log4j.properties:

log4j.rootLogger=WARN, R

##rolling appender
log4j.appender.R=org.apache.log4j.RollingFileAppender
log4j.appender.R.File=${catalina.home}/logs/catalina.out
log4j.appender.R.MaxFileSize=10MB
log4j.appender.R.MaxBackupIndex=10
log4j.appender.R.layout=org.apache.log4j.PatternLayout
log4j.appender.R.layout.ConversionPattern=%d [%t] %-5p %c - %m%n

## logging levels
log4j.logger.org.apache.catalina.core.ContainerBase.[Catalina].[localhost]=WARN
log4j.logger.org.apache.catalina.core=WARN
log4j.logger.org.apache.catalina.session=WARN
log4j.logger.com.ourcompany=INFO

tomcat config
==
Probably relevant, but there are multiple conf files and web.xml, and I
don't know which of them are involved in logging.

Regards,
Simon