Build failed in Jenkins: Log4j 2 2.x #3604

2018-08-28 Thread Apache Jenkins Server
See 

--
[...truncated 1.15 MB...]
Uploaded: 
https://repository.apache.org/content/repositories/snapshots/org/apache/logging/log4j/log4j-mongodb2/2.11.2-SNAPSHOT/log4j-mongodb2-2.11.2-20180828.225309-32.pom
 (8 KB at 3.6 KB/sec)
Downloading: 
https://repository.apache.org/content/repositories/snapshots/org/apache/logging/log4j/log4j-mongodb2/maven-metadata.xml
Downloaded: 
https://repository.apache.org/content/repositories/snapshots/org/apache/logging/log4j/log4j-mongodb2/maven-metadata.xml
 (440 B at 0.4 KB/sec)
Uploading: 
https://repository.apache.org/content/repositories/snapshots/org/apache/logging/log4j/log4j-mongodb2/2.11.2-SNAPSHOT/maven-metadata.xml
Uploaded: 
https://repository.apache.org/content/repositories/snapshots/org/apache/logging/log4j/log4j-mongodb2/2.11.2-SNAPSHOT/maven-metadata.xml
 (2 KB at 0.6 KB/sec)
Uploading: 
https://repository.apache.org/content/repositories/snapshots/org/apache/logging/log4j/log4j-mongodb2/maven-metadata.xml
Uploaded: 
https://repository.apache.org/content/repositories/snapshots/org/apache/logging/log4j/log4j-mongodb2/maven-metadata.xml
 (440 B at 0.2 KB/sec)
Deploying the main artifact log4j-mongodb2-2.11.2-SNAPSHOT-sources.jar
Uploading: 
https://repository.apache.org/content/repositories/snapshots/org/apache/logging/log4j/log4j-mongodb2/2.11.2-SNAPSHOT/log4j-mongodb2-2.11.2-20180828.225309-32-sources.jar
Uploaded: 
https://repository.apache.org/content/repositories/snapshots/org/apache/logging/log4j/log4j-mongodb2/2.11.2-SNAPSHOT/log4j-mongodb2-2.11.2-20180828.225309-32-sources.jar
 (16 KB at 6.6 KB/sec)
Uploading: 
https://repository.apache.org/content/repositories/snapshots/org/apache/logging/log4j/log4j-mongodb2/2.11.2-SNAPSHOT/maven-metadata.xml
Uploaded: 
https://repository.apache.org/content/repositories/snapshots/org/apache/logging/log4j/log4j-mongodb2/2.11.2-SNAPSHOT/maven-metadata.xml
 (2 KB at 0.7 KB/sec)
Deploying the main artifact log4j-mongodb2-2.11.2-SNAPSHOT-test-sources.jar
Uploading: 
https://repository.apache.org/content/repositories/snapshots/org/apache/logging/log4j/log4j-mongodb2/2.11.2-SNAPSHOT/log4j-mongodb2-2.11.2-20180828.225309-32-test-sources.jar
Uploaded: 
https://repository.apache.org/content/repositories/snapshots/org/apache/logging/log4j/log4j-mongodb2/2.11.2-SNAPSHOT/log4j-mongodb2-2.11.2-20180828.225309-32-test-sources.jar
 (23 KB at 9.2 KB/sec)
Uploading: 
https://repository.apache.org/content/repositories/snapshots/org/apache/logging/log4j/log4j-mongodb2/2.11.2-SNAPSHOT/maven-metadata.xml
Uploaded: 
https://repository.apache.org/content/repositories/snapshots/org/apache/logging/log4j/log4j-mongodb2/2.11.2-SNAPSHOT/maven-metadata.xml
 (2 KB at 0.7 KB/sec)
[INFO] Deployment in 
https://repository.apache.org/content/repositories/snapshots 
(id=apache.snapshots.https,uniqueVersion=true)
Deploying the main artifact log4j-mongodb3-2.11.2-SNAPSHOT.jar
Downloading: 
https://repository.apache.org/content/repositories/snapshots/org/apache/logging/log4j/log4j-mongodb3/2.11.2-SNAPSHOT/maven-metadata.xml
Downloaded: 
https://repository.apache.org/content/repositories/snapshots/org/apache/logging/log4j/log4j-mongodb3/2.11.2-SNAPSHOT/maven-metadata.xml
 (2 KB at 0.7 KB/sec)
Uploading: 
https://repository.apache.org/content/repositories/snapshots/org/apache/logging/log4j/log4j-mongodb3/2.11.2-SNAPSHOT/log4j-mongodb3-2.11.2-20180828.225329-32.jar
Uploaded: 
https://repository.apache.org/content/repositories/snapshots/org/apache/logging/log4j/log4j-mongodb3/2.11.2-SNAPSHOT/log4j-mongodb3-2.11.2-20180828.225329-32.jar
 (22 KB at 11.2 KB/sec)
Uploading: 
https://repository.apache.org/content/repositories/snapshots/org/apache/logging/log4j/log4j-mongodb3/2.11.2-SNAPSHOT/log4j-mongodb3-2.11.2-20180828.225329-32.pom
Uploaded: 
https://repository.apache.org/content/repositories/snapshots/org/apache/logging/log4j/log4j-mongodb3/2.11.2-SNAPSHOT/log4j-mongodb3-2.11.2-20180828.225329-32.pom
 (8 KB at 3.3 KB/sec)
Downloading: 
https://repository.apache.org/content/repositories/snapshots/org/apache/logging/log4j/log4j-mongodb3/maven-metadata.xml
Downloaded: 
https://repository.apache.org/content/repositories/snapshots/org/apache/logging/log4j/log4j-mongodb3/maven-metadata.xml
 (440 B at 0.4 KB/sec)
Uploading: 
https://repository.apache.org/content/repositories/snapshots/org/apache/logging/log4j/log4j-mongodb3/2.11.2-SNAPSHOT/maven-metadata.xml
Uploaded: 
https://repository.apache.org/content/repositories/snapshots/org/apache/logging/log4j/log4j-mongodb3/2.11.2-SNAPSHOT/maven-metadata.xml
 (2 KB at 0.6 KB/sec)
Uploading: 
https://repository.apache.org/content/repositories/snapshots/org/apache/logging/log4j/log4j-mongodb3/maven-metadata.xml
Uploaded: 
https://repository.apache.org/content/repositories/snapshots/org/apache/logging/log4j/log4j-mongodb3/maven-metadata.xml
 (440 B at 0.2 KB/sec)
Deploying the main artifact log4j-mongodb3-2.11.2-SNAPSH

[jira] [Resolved] (LOG4J2-2422) Handle some unchecked exceptions while loading plugins

2018-08-28 Thread Gary Gregory (JIRA)


 [ 
https://issues.apache.org/jira/browse/LOG4J2-2422?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Gary Gregory resolved LOG4J2-2422.
--
   Resolution: Fixed
 Assignee: Gary Gregory
Fix Version/s: 2.11.2
   3.0.0

Fixed in {{release-2.x}} and {{master}}. Please verify and close this ticket.

> Handle some unchecked exceptions while loading plugins
> --
>
> Key: LOG4J2-2422
> URL: https://issues.apache.org/jira/browse/LOG4J2-2422
> Project: Log4j 2
>  Issue Type: Bug
>  Components: Plugins
>Affects Versions: 2.8.2
>Reporter: rswart
>Assignee: Gary Gregory
>Priority: Major
> Fix For: 3.0.0, 2.11.2
>
>
> The PluginRegistry handles 
> [ClassNotFoundException|https://github.com/apache/logging-log4j2/blob/e741549928b2acbcb2d11ad285aa84ee88728e49/log4j-core/src/main/java/org/apache/logging/log4j/core/config/plugins/util/PluginRegistry.java#L185]
> but does not handle unchecked throwables such as NoClassDefFoundError. As a 
> result, applications may not start when loading a plugin fails with an 
> unchecked throwable.
>  
> Here is the scenario we ran into:
>  
> We use [logstash-gelf|http://logging.paluch.biz/] in a standardized Tomcat 
> Docker image to send Java Util Logging (as used by Tomcat) to Graylog. To do 
> this we add the logstash-gelf jar to the $CATALINA_HOME/lib directory, 
> effectively placing it on Tomcat's common loader classpath. In essence there 
> is no log4j involved, but the logstash-gelf jar contains integrations for 
> various logging frameworks, including a log4j2 appender.
> When a web application deployed on this Tomcat instance uses log4j2 as its 
> logging framework, the logstash-gelf appender is found during plugin scanning 
> and the PluginRegistry tries to load it (even if the appender is not used in 
> the log4j configuration). The logstash-gelf plugin is not loaded via the 
> web application classloader, but through the parent common loader. It can find 
> the plugin class but not the dependent log4j2 classes, as they are only on the 
> classpath of the web application classloader:
>  
> {code:java}
> java.lang.NoClassDefFoundError: 
> org/apache/logging/log4j/core/appender/AbstractAppender
> {code}
>  
>  



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (LOG4J2-2422) Handle some unchecked exceptions while loading plugins

2018-08-28 Thread ASF subversion and git services (JIRA)


[ 
https://issues.apache.org/jira/browse/LOG4J2-2422?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16595620#comment-16595620
 ] 

ASF subversion and git services commented on LOG4J2-2422:
-

Commit 7333573d9dd34ec9a0664f2115a0fce432783433 in logging-log4j2's branch 
refs/heads/master from [~garydgregory]
[ https://git-wip-us.apache.org/repos/asf?p=logging-log4j2.git;h=7333573 ]

[LOG4J2-2422] Handle some unchecked exceptions while loading plugins.

> Handle some unchecked exceptions while loading plugins
> --
>
> Key: LOG4J2-2422
> URL: https://issues.apache.org/jira/browse/LOG4J2-2422
> Project: Log4j 2
>  Issue Type: Bug
>  Components: Plugins
>Affects Versions: 2.8.2
>Reporter: rswart
>Priority: Major
>
> The PluginRegistry handles 
> [ClassNotFoundException|https://github.com/apache/logging-log4j2/blob/e741549928b2acbcb2d11ad285aa84ee88728e49/log4j-core/src/main/java/org/apache/logging/log4j/core/config/plugins/util/PluginRegistry.java#L185]
> but does not handle unchecked throwables such as NoClassDefFoundError. As a 
> result, applications may not start when loading a plugin fails with an 
> unchecked throwable.
>  
> Here is the scenario we ran into:
>  
> We use [logstash-gelf|http://logging.paluch.biz/] in a standardized Tomcat 
> Docker image to send Java Util Logging (as used by Tomcat) to Graylog. To do 
> this we add the logstash-gelf jar to the $CATALINA_HOME/lib directory, 
> effectively placing it on Tomcat's common loader classpath. In essence there 
> is no log4j involved, but the logstash-gelf jar contains integrations for 
> various logging frameworks, including a log4j2 appender.
> When a web application deployed on this Tomcat instance uses log4j2 as its 
> logging framework, the logstash-gelf appender is found during plugin scanning 
> and the PluginRegistry tries to load it (even if the appender is not used in 
> the log4j configuration). The logstash-gelf plugin is not loaded via the 
> web application classloader, but through the parent common loader. It can find 
> the plugin class but not the dependent log4j2 classes, as they are only on the 
> classpath of the web application classloader:
>  
> {code:java}
> java.lang.NoClassDefFoundError: 
> org/apache/logging/log4j/core/appender/AbstractAppender
> {code}
>  
>  



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (LOG4J2-2422) Handle some unchecked exceptions while loading plugins

2018-08-28 Thread ASF subversion and git services (JIRA)


[ 
https://issues.apache.org/jira/browse/LOG4J2-2422?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16595616#comment-16595616
 ] 

ASF subversion and git services commented on LOG4J2-2422:
-

Commit 4d76d3e346c34ba32e12199fd70fbf68a9766142 in logging-log4j2's branch 
refs/heads/release-2.x from [~garydgregory]
[ https://git-wip-us.apache.org/repos/asf?p=logging-log4j2.git;h=4d76d3e ]

[LOG4J2-2422] Handle some unchecked exceptions while loading plugins.

> Handle some unchecked exceptions while loading plugins
> --
>
> Key: LOG4J2-2422
> URL: https://issues.apache.org/jira/browse/LOG4J2-2422
> Project: Log4j 2
>  Issue Type: Bug
>  Components: Plugins
>Affects Versions: 2.8.2
>Reporter: rswart
>Priority: Major
>
> The PluginRegistry handles 
> [ClassNotFoundException|https://github.com/apache/logging-log4j2/blob/e741549928b2acbcb2d11ad285aa84ee88728e49/log4j-core/src/main/java/org/apache/logging/log4j/core/config/plugins/util/PluginRegistry.java#L185]
> but does not handle unchecked throwables such as NoClassDefFoundError. As a 
> result, applications may not start when loading a plugin fails with an 
> unchecked throwable.
>  
> Here is the scenario we ran into:
>  
> We use [logstash-gelf|http://logging.paluch.biz/] in a standardized Tomcat 
> Docker image to send Java Util Logging (as used by Tomcat) to Graylog. To do 
> this we add the logstash-gelf jar to the $CATALINA_HOME/lib directory, 
> effectively placing it on Tomcat's common loader classpath. In essence there 
> is no log4j involved, but the logstash-gelf jar contains integrations for 
> various logging frameworks, including a log4j2 appender.
> When a web application deployed on this Tomcat instance uses log4j2 as its 
> logging framework, the logstash-gelf appender is found during plugin scanning 
> and the PluginRegistry tries to load it (even if the appender is not used in 
> the log4j configuration). The logstash-gelf plugin is not loaded via the 
> web application classloader, but through the parent common loader. It can find 
> the plugin class but not the dependent log4j2 classes, as they are only on the 
> classpath of the web application classloader:
>  
> {code:java}
> java.lang.NoClassDefFoundError: 
> org/apache/logging/log4j/core/appender/AbstractAppender
> {code}
>  
>  



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (LOG4J2-2422) Consider handling unchecked exceptions while loading plugins

2018-08-28 Thread Gary Gregory (JIRA)


[ 
https://issues.apache.org/jira/browse/LOG4J2-2422?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16595615#comment-16595615
 ] 

Gary Gregory commented on LOG4J2-2422:
--

We can catch {{LinkageError}} instead of {{VerifyError}}.
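A minimal sketch of that idea (hedged; this is not the actual PluginRegistry change, and the helper class and logging are illustrative only):

{code:java}
import org.apache.logging.log4j.Logger;
import org.apache.logging.log4j.status.StatusLogger;

public final class DefensivePluginLoader {
    private static final Logger LOGGER = StatusLogger.getLogger();

    /** Returns null instead of propagating a missing-dependency failure. */
    public static Class<?> loadPluginClass(final ClassLoader loader, final String className) {
        try {
            return loader.loadClass(className);
        } catch (ClassNotFoundException | LinkageError e) {
            // LinkageError covers NoClassDefFoundError and VerifyError, so a plugin
            // whose dependencies are absent is skipped rather than failing startup.
            LOGGER.warn("Plugin class [{}] could not be loaded.", className, e);
            return null;
        }
    }
}
{code}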

> Consider handling unchecked exceptions while loading plugins
> 
>
> Key: LOG4J2-2422
> URL: https://issues.apache.org/jira/browse/LOG4J2-2422
> Project: Log4j 2
>  Issue Type: Bug
>  Components: Plugins
>Affects Versions: 2.8.2
>Reporter: rswart
>Priority: Major
>
> The PluginRegistry handles 
> [ClassNotFoundException|https://github.com/apache/logging-log4j2/blob/e741549928b2acbcb2d11ad285aa84ee88728e49/log4j-core/src/main/java/org/apache/logging/log4j/core/config/plugins/util/PluginRegistry.java#L185]
> but does not handle unchecked throwables such as NoClassDefFoundError. As a 
> result, applications may not start when loading a plugin fails with an 
> unchecked throwable.
>  
> Here is the scenario we ran into:
>  
> We use [logstash-gelf|http://logging.paluch.biz/] in a standardized Tomcat 
> Docker image to send Java Util Logging (as used by Tomcat) to Graylog. To do 
> this we add the logstash-gelf jar to the $CATALINA_HOME/lib directory, 
> effectively placing it on Tomcat's common loader classpath. In essence there 
> is no log4j involved, but the logstash-gelf jar contains integrations for 
> various logging frameworks, including a log4j2 appender.
> When a web application deployed on this Tomcat instance uses log4j2 as its 
> logging framework, the logstash-gelf appender is found during plugin scanning 
> and the PluginRegistry tries to load it (even if the appender is not used in 
> the log4j configuration). The logstash-gelf plugin is not loaded via the 
> web application classloader, but through the parent common loader. It can find 
> the plugin class but not the dependent log4j2 classes, as they are only on the 
> classpath of the web application classloader:
>  
> {code:java}
> java.lang.NoClassDefFoundError: 
> org/apache/logging/log4j/core/appender/AbstractAppender
> {code}
>  
>  



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (LOG4J2-2422) Handling some unchecked exceptions while loading plugins

2018-08-28 Thread Gary Gregory (JIRA)


 [ 
https://issues.apache.org/jira/browse/LOG4J2-2422?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Gary Gregory updated LOG4J2-2422:
-
Summary: Handling some unchecked exceptions while loading plugins  (was: 
Consider handling unchecked exceptions while loading plugins)

> Handling some unchecked exceptions while loading plugins
> 
>
> Key: LOG4J2-2422
> URL: https://issues.apache.org/jira/browse/LOG4J2-2422
> Project: Log4j 2
>  Issue Type: Bug
>  Components: Plugins
>Affects Versions: 2.8.2
>Reporter: rswart
>Priority: Major
>
> The PluginRegistry handles 
> [ClassNotFoundException|https://github.com/apache/logging-log4j2/blob/e741549928b2acbcb2d11ad285aa84ee88728e49/log4j-core/src/main/java/org/apache/logging/log4j/core/config/plugins/util/PluginRegistry.java#L185]
> but does not handle unchecked throwables such as NoClassDefFoundError. As a 
> result, applications may not start when loading a plugin fails with an 
> unchecked throwable.
>  
> Here is the scenario we ran into:
>  
> We use [logstash-gelf|http://logging.paluch.biz/] in a standardized Tomcat 
> Docker image to send Java Util Logging (as used by Tomcat) to Graylog. To do 
> this we add the logstash-gelf jar to the $CATALINA_HOME/lib directory, 
> effectively placing it on Tomcat's common loader classpath. In essence there 
> is no log4j involved, but the logstash-gelf jar contains integrations for 
> various logging frameworks, including a log4j2 appender.
> When a web application deployed on this Tomcat instance uses log4j2 as its 
> logging framework, the logstash-gelf appender is found during plugin scanning 
> and the PluginRegistry tries to load it (even if the appender is not used in 
> the log4j configuration). The logstash-gelf plugin is not loaded via the 
> web application classloader, but through the parent common loader. It can find 
> the plugin class but not the dependent log4j2 classes, as they are only on the 
> classpath of the web application classloader:
>  
> {code:java}
> java.lang.NoClassDefFoundError: 
> org/apache/logging/log4j/core/appender/AbstractAppender
> {code}
>  
>  



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (LOG4J2-2422) Handle some unchecked exceptions while loading plugins

2018-08-28 Thread Gary Gregory (JIRA)


 [ 
https://issues.apache.org/jira/browse/LOG4J2-2422?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Gary Gregory updated LOG4J2-2422:
-
Summary: Handle some unchecked exceptions while loading plugins  (was: 
Handling some unchecked exceptions while loading plugins)

> Handle some unchecked exceptions while loading plugins
> --
>
> Key: LOG4J2-2422
> URL: https://issues.apache.org/jira/browse/LOG4J2-2422
> Project: Log4j 2
>  Issue Type: Bug
>  Components: Plugins
>Affects Versions: 2.8.2
>Reporter: rswart
>Priority: Major
>
> The PluginRegistry handles 
> [ClassNotFoundException|https://github.com/apache/logging-log4j2/blob/e741549928b2acbcb2d11ad285aa84ee88728e49/log4j-core/src/main/java/org/apache/logging/log4j/core/config/plugins/util/PluginRegistry.java#L185]
> but does not handle unchecked throwables such as NoClassDefFoundError. As a 
> result, applications may not start when loading a plugin fails with an 
> unchecked throwable.
>  
> Here is the scenario we ran into:
>  
> We use [logstash-gelf|http://logging.paluch.biz/] in a standardized Tomcat 
> Docker image to send Java Util Logging (as used by Tomcat) to Graylog. To do 
> this we add the logstash-gelf jar to the $CATALINA_HOME/lib directory, 
> effectively placing it on Tomcat's common loader classpath. In essence there 
> is no log4j involved, but the logstash-gelf jar contains integrations for 
> various logging frameworks, including a log4j2 appender.
> When a web application deployed on this Tomcat instance uses log4j2 as its 
> logging framework, the logstash-gelf appender is found during plugin scanning 
> and the PluginRegistry tries to load it (even if the appender is not used in 
> the log4j configuration). The logstash-gelf plugin is not loaded via the 
> web application classloader, but through the parent common loader. It can find 
> the plugin class but not the dependent log4j2 classes, as they are only on the 
> classpath of the web application classloader:
>  
> {code:java}
> java.lang.NoClassDefFoundError: 
> org/apache/logging/log4j/core/appender/AbstractAppender
> {code}
>  
>  



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (LOG4J2-2424) Process ID (pid) lookup

2018-08-28 Thread Simon Schneider (JIRA)


[ 
https://issues.apache.org/jira/browse/LOG4J2-2424?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16595566#comment-16595566
 ] 

Simon Schneider commented on LOG4J2-2424:
-

True, we are running on multiple hosts as well; however, more important for us 
are multiple processes on the same host.

> Process ID (pid) lookup
> ---
>
> Key: LOG4J2-2424
> URL: https://issues.apache.org/jira/browse/LOG4J2-2424
> Project: Log4j 2
>  Issue Type: New Feature
>  Components: Lookups
>Affects Versions: 2.11.1
>Reporter: Simon Schneider
>Priority: Major
>
> Similar to LOG4J2-1884, it would be great to be able to use {{%pid}} as a lookup, 
> e.g. to separate log files from different processes.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Comment Edited] (LOG4J2-2424) Process ID (pid) lookup

2018-08-28 Thread Ralph Goers (JIRA)


[ 
https://issues.apache.org/jira/browse/LOG4J2-2424?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16595493#comment-16595493
 ] 

Ralph Goers edited comment on LOG4J2-2424 at 8/28/18 7:30 PM:
--

OK. I thought you were trying to separate log events somehow, not distinguish 
between log files.

You could also use the hostname, assuming they are running on different 
servers. Then again, to be unique you probably need the hostname and the pid. 
Be aware, though, that process IDs get reused.


was (Author: ralph.go...@dslextreme.com):
OK. I thought you were trying to separate log events somehow, not distinguish 
between log files.

You could also use the hostname.

> Process ID (pid) lookup
> ---
>
> Key: LOG4J2-2424
> URL: https://issues.apache.org/jira/browse/LOG4J2-2424
> Project: Log4j 2
>  Issue Type: New Feature
>  Components: Lookups
>Affects Versions: 2.11.1
>Reporter: Simon Schneider
>Priority: Major
>
> Similar to LOG4J2-1884, it would be great to be able to use {{%pid}} as a lookup, 
> e.g. to separate log files from different processes.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (LOG4J2-2424) Process ID (pid) lookup

2018-08-28 Thread Ralph Goers (JIRA)


[ 
https://issues.apache.org/jira/browse/LOG4J2-2424?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16595493#comment-16595493
 ] 

Ralph Goers commented on LOG4J2-2424:
-

OK. I thought you were trying to separate log events somehow, not distinguish 
between log files.

You could also use the hostname.

> Process ID (pid) lookup
> ---
>
> Key: LOG4J2-2424
> URL: https://issues.apache.org/jira/browse/LOG4J2-2424
> Project: Log4j 2
>  Issue Type: New Feature
>  Components: Lookups
>Affects Versions: 2.11.1
>Reporter: Simon Schneider
>Priority: Major
>
> Similar to LOG4J2-1884, it would be great to be able to use {{%pid}} as a lookup, 
> e.g. to separate log files from different processes.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (LOG4J2-2424) Process ID (pid) lookup

2018-08-28 Thread Simon Schneider (JIRA)


[ 
https://issues.apache.org/jira/browse/LOG4J2-2424?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16595475#comment-16595475
 ] 

Simon Schneider commented on LOG4J2-2424:
-

We are starting multiple instances of the same application (multiple JVMs) and 
are looking for a way to write logs into different files. The only current 
standard way that we found is to use timestamps, but with a timestamp it is 
not as easy to correlate the log file with the application.

Our current workaround consists of writing the PID into a System property 
before initialising the logging system; however, this is error-prone for us, as 
it quite often happened that the logger accidentally got initialized too early. 
We also have a solution based on a log4j2 Plugin extending 
AbstractLookup, but we would like to avoid any compile-time dependencies on 
log4j2 since we are using slf4j as a facade.
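For reference, a minimal sketch of the system-property workaround described above (the property name {{app.pid}} is illustrative, and Java 9+ is assumed for {{ProcessHandle}}); the crucial point is that the property is set before the first logger is created:

{code:java}
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;

public final class Main {
    public static void main(String[] args) {
        // Set the property before the first logger is created; otherwise the
        // appender's ${sys:app.pid} reference resolves before the value exists
        // (the "initialized too early" failure mode described above).
        System.setProperty("app.pid", Long.toString(ProcessHandle.current().pid()));

        // Only now obtain a logger; a file appender can then be configured with
        // fileName="app-${sys:app.pid}.log" to separate the files per process.
        Logger log = LoggerFactory.getLogger(Main.class);
        log.info("Started with pid {}", System.getProperty("app.pid"));
    }
}
{code}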

> Process ID (pid) lookup
> ---
>
> Key: LOG4J2-2424
> URL: https://issues.apache.org/jira/browse/LOG4J2-2424
> Project: Log4j 2
>  Issue Type: New Feature
>  Components: Lookups
>Affects Versions: 2.11.1
>Reporter: Simon Schneider
>Priority: Major
>
> Similar to LOG4J2-1884, it would be great to be able to use {{%pid}} as a lookup, 
> e.g. to separate log files from different processes.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (LOG4J2-2424) Process ID (pid) lookup

2018-08-28 Thread Ralph Goers (JIRA)


[ 
https://issues.apache.org/jira/browse/LOG4J2-2424?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16595355#comment-16595355
 ] 

Ralph Goers commented on LOG4J2-2424:
-

A pid lookup would return the current process id. I am not sure how that would 
help in separating log files. Can you explain more about what you are thinking?

> Process ID (pid) lookup
> ---
>
> Key: LOG4J2-2424
> URL: https://issues.apache.org/jira/browse/LOG4J2-2424
> Project: Log4j 2
>  Issue Type: New Feature
>  Components: Lookups
>Affects Versions: 2.11.1
>Reporter: Simon Schneider
>Priority: Major
>
> Similar to LOG4J2-1884 it would be great to be able to use {{%pid}} as lookup 
> e.g. to seperate log files from different processes.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Created] (LOG4J2-2424) Process ID (pid) lookup

2018-08-28 Thread Simon (JIRA)
Simon created LOG4J2-2424:
-

 Summary: Process ID (pid) lookup
 Key: LOG4J2-2424
 URL: https://issues.apache.org/jira/browse/LOG4J2-2424
 Project: Log4j 2
  Issue Type: New Feature
  Components: Lookups
Affects Versions: 2.11.1
Reporter: Simon


Similar to LOG4J2-1884, it would be great to be able to use {{%pid}} as a lookup, 
e.g. to separate log files from different processes.
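For illustration, a hedged sketch of what such a lookup plugin could look like (the plugin name {{pid}} and the class are hypothetical, not an existing Log4j 2 plugin); a file pattern could then reference it, e.g. fileName="app-${pid:id}.log" (the key is ignored by this sketch):

{code:java}
import org.apache.logging.log4j.core.LogEvent;
import org.apache.logging.log4j.core.config.plugins.Plugin;
import org.apache.logging.log4j.core.lookup.AbstractLookup;
import org.apache.logging.log4j.core.lookup.StrLookup;

@Plugin(name = "pid", category = StrLookup.CATEGORY)
public class ProcessIdLookup extends AbstractLookup {

    @Override
    public String lookup(final LogEvent event, final String key) {
        // Java 9+: ProcessHandle avoids parsing the JMX runtime MXBean name.
        return Long.toString(ProcessHandle.current().pid());
    }
}
{code}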



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (LOG4J2-2423) Rolled files are not deleted when a date is used in the pattern

2018-08-28 Thread Ralph Goers (JIRA)


[ 
https://issues.apache.org/jira/browse/LOG4J2-2423?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16595082#comment-16595082
 ] 

Ralph Goers commented on LOG4J2-2423:
-

The max files parameter limits the number of files that will be saved during a 
time-based rollover "window". So if you have max files set to 7 and a time 
pattern of 1 day, then you will only have 7 files per day. If you want to limit 
the total number of files, you either can't use a time-based rollover or you 
need to use the Delete action to clean things up.
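For example, a configuration along these lines (a hedged sketch; the appender name, paths, layout and glob are illustrative, not taken from the ticket) combines a date-based file pattern with a Delete action that caps the total number of archives:

{code:xml}
<RollingFile name="trace" fileName="app/log/trace.log"
             filePattern="app/log/trace.%d{yyyy-MM-dd}-%i.log.gz">
  <PatternLayout pattern="%d %p %c{1.} [%t] %m%n"/>
  <Policies>
    <TimeBasedTriggeringPolicy/>
    <SizeBasedTriggeringPolicy size="100 MB"/>
  </Policies>
  <DefaultRolloverStrategy max="7">
    <!-- The Delete action runs on every rollover and limits the total number of
         archived files, independent of the date part of the file pattern. -->
    <Delete basePath="app/log" maxDepth="1">
      <IfFileName glob="trace.*.log.gz"/>
      <IfAccumulatedFileCount exceeds="7"/>
    </Delete>
  </DefaultRolloverStrategy>
</RollingFile>
{code}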

> Rolled files are not deleted when a date is used in the pattern
> ---
>
> Key: LOG4J2-2423
> URL: https://issues.apache.org/jira/browse/LOG4J2-2423
> Project: Log4j 2
>  Issue Type: Bug
>  Components: Appenders
>Reporter: Carter Kozak
>Assignee: Carter Kozak
>Priority: Major
>
> In my appender definition I set 
> filePattern="app/log/trace.%d\{yyyy-MM-dd}-%i.log.gz"
> I would expect to see a maximum of 7 rolled trace logs, however I have 
> accumulated over 30.
> While running in a debugger, in AbstractRolloverStrategy.getEligibleFiles I 
> see filePattern set to "trace.2018-08-22-(\d+).log.\*". I would expect the 
> date to be replaced to something along the lines of 
> "trace.(\d+)\-(\d+)\-(\d+)\-(\d+).log.\*"
> Based on the documentation and javadoc this doesn't appear to be entirely 
> unexpected, however it is odd that based on the presence of a date in the 
> file pattern a default rolling file appender may create up to 7 total files, 
> or up to 7 files per date pattern minimum interval.
> I'm curious if this has been discussed elsewhere that I may have missed, or 
> if this is consistent with others' expectations of DefaultRolloverStrategy. If 
> so I will update the documentation to be clearer around this point.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (LOG4J2-2423) Rolled files are not deleted when a date is used in the pattern

2018-08-28 Thread Carter Kozak (JIRA)


 [ 
https://issues.apache.org/jira/browse/LOG4J2-2423?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Carter Kozak updated LOG4J2-2423:
-
Description: 
In my appender definition I set 
filePattern="app/log/trace.%d\{yyyy-MM-dd}-%i.log.gz"
I would expect to see a maximum of 7 rolled trace logs, however I have 
accumulated over 30.

While running in a debugger, in AbstractRolloverStrategy.getEligibleFiles I see 
filePattern set to "trace.2018-08-22-(\d+).log.\*". I would expect the date to 
be replaced to something along the lines of 
"trace.(\d+)\-(\d+)\-(\d+)\-(\d+).log.\*"

Based on the documentation and javadoc this doesn't appear to be entirely 
unexpected, however it is odd that based on the presence of a date in the file 
pattern a default rolling file appender may create up to 7 total files, or up 
to 7 files per date pattern minimum interval.

I'm curious if this has been discussed elsewhere that I may have missed, or if 
this is consistent with others' expectations of DefaultRolloverStrategy. If so I 
will update the documentation to be clearer around this point.

  was:
In my appender definition I set 
filePattern="app/log/trace.%d\{yyyy-MM-dd}-%i.log.gz"
I would expect to see a maximum of 7 rolled trace logs, however I have 
accumulated over 30.

While running in a debugger, in AbstractRolloverStrategy.getEligibleFiles I see 
filePattern set to "trace.2018-08-22-(\d+).log.\*". I would expect the date to 
be replaced to something along the lines of 
"trace.(\d+)-(\d+)-(\d+)-(\d+).log.\*"

Based on the documentation and javadoc this doesn't appear to be entirely 
unexpected, however it is odd that based on the presence of a date in the file 
pattern a default rolling file appender may create up to 7 total files, or up 
to 7 files per date pattern minimum interval.

I'm curious if this has been discussed elsewhere that I may have missed, or if 
this is consistent with others' expectations of DefaultRolloverStrategy. If so I 
will update the documentation to be clearer around this point.


> Rolled files are not deleted when a date is used in the pattern
> ---
>
> Key: LOG4J2-2423
> URL: https://issues.apache.org/jira/browse/LOG4J2-2423
> Project: Log4j 2
>  Issue Type: Bug
>  Components: Appenders
>Reporter: Carter Kozak
>Assignee: Carter Kozak
>Priority: Major
>
> In my appender definition I set 
> filePattern="app/log/trace.%d\{yyyy-MM-dd}-%i.log.gz"
> I would expect to see a maximum of 7 rolled trace logs, however I have 
> accumulated over 30.
> While running in a debugger, in AbstractRolloverStrategy.getEligibleFiles I 
> see filePattern set to "trace.2018-08-22-(\d+).log.\*". I would expect the 
> date to be replaced to something along the lines of 
> "trace.(\d+)\-(\d+)\-(\d+)\-(\d+).log.\*"
> Based on the documentation and javadoc this doesn't appear to be entirely 
> unexpected, however it is odd that based on the presence of a date in the 
> file pattern a default rolling file appender may create up to 7 total files, 
> or up to 7 files per date pattern minimum interval.
> I'm curious if this has been discussed elsewhere that I may have missed, or 
> if this is consistent with others' expectations of DefaultRolloverStrategy. If 
> so I will update the documentation to be clearer around this point.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (LOGCXX-500) Logging in Timing-Critical Applications

2018-08-28 Thread JIRA


[ 
https://issues.apache.org/jira/browse/LOGCXX-500?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16594967#comment-16594967
 ] 

Thorsten Schöning commented on LOGCXX-500:
--

If you don't want to fix them, just leave the error messages here so someone can 
have a look at them later.

> Logging in Timing-Critical Applications
> ---
>
> Key: LOGCXX-500
> URL: https://issues.apache.org/jira/browse/LOGCXX-500
> Project: Log4cxx
>  Issue Type: Improvement
>  Components: Core
>Affects Versions: 0.10.0
>Reporter: Thorsten Schöning
>Priority: Minor
> Attachments: config.xml, main.cpp, non_blocking.diff, 
> non_blocking_wo_debian_control.diff
>
>
> The following arrived on the mailing list; it is provided here as well, 
> mainly to collect the given patches etc.:
> {quote}Hello All,
> I'd like to share some experience as well as some patches with regard 
> to using log4cxx in timing-critical applications. First, a few words about 
> our requirements: it's a service which must generate network 
> packets with up to a hundred microseconds of precision. Thus, it's very 
> important to have predictable code timing. One can argue that log4cxx 
> is not very well suited for such applications, but surprisingly it 
> works pretty well after some light tuning.
> So, what were the issues?
> Basically, from the library user's point of view they all looked the same: all 
> of a sudden, logging done with the LOG4CXX_DEBUG() macro could take an 
> unexpectedly long time to complete. For example, the same trace which 
> takes several μs 99% of the time would sometimes take hundreds of microseconds 
> or even a few milliseconds. After further investigation this 
> has been traced down to a few root causes:
> 1. The async logger (which we have been using, of course) has an internal queue 
> to pass log entries to the background disk-writer thread. This queue is 
> mutex-protected, which might seem fine unless you think a little bit 
> more about it. First of all, someone calling LOG4CXX_DEBUG() to simply 
> put something into the log might not expect to be blocked inside, 
> waiting for a mutex, at all. The second point is that, although measures 
> were taken to minimize the time the disk thread holds that lock, 
> OS schedulers often work in a way that a thread which is blocked on a 
> mutex gets de-scheduled. With a normal OS-scheduler quantum that means 
> that the logging thread can be preempted for milliseconds.
> 2. There are some mutexes protecting the internal state of both loggers 
> and appenders. This means that two separate threads calling 
> LOG4CXX_DEBUG() can block each other. Even if they are using different 
> loggers they would block on the appender! This has the same consequences 
> for execution timing and performance as described above.
> 3. The std::stringstream constructor has some internal locks of its 
> own. Unfortunately each MessageBuffer has its own instance of this 
> class. And, also unfortunately, a MessageBuffer is created inside the 
> LOG4CXX_DEBUG() macro. There is an optimization to not create the stringstream 
> when logging simple strings, but as soon as your log statement has a 
> single '<<' operator it's created.
> 4. Dynamic memory allocations. Unfortunately there are still quite a few 
> of them, even though a memory pool is used in some other places. Thus, 
> hidden calls to new and malloc induce unpredictable delays.
> So, what did we do to mitigate these problems?
> 1. The natural solution for this issue was to use an atomic queue. There are 
> a few of them available, but we made use of boost::lockfree::queue as it 
> can serve as a drop-in replacement, allowing us to keep all present 
> functionality.
> 2. After looking more into the code, it appeared that two concurrent 
> calls to LOG4CXX_DEBUG() from within different threads are not harmful 
> because the internal structures of the logger and appender are not being 
> changed there. What really requires protection is only concurrency 
> between logging and configuring. Thus, we came to a solution: 
> read-write locks where logging calls act as readers and 
> configuration/exiting calls are writers. With such an approach, multiple 
> threads calling LOG4CXX_DEBUG() became free of any contention.
> 3. This problem also has one simple solution: make one static 
> std::stringstream object per thread using thread_local. 
> Unfortunately we found one drawback: thread_local memory is not 
> released if the thread is not detached or joined. As there is some code 
> which does neither of these, we made the static stringstream an XML file 
> configuration option. Also, there could be an issue with using multiple 
> MessageBuffer instances from within a single thread, but LOG4CXX_DEBUG() 
> is not doing that.
> 4. At this time we didn't do anything to address the dynamic memory 
> allocation issue.
> So, if 

[jira] [Updated] (LOG4J2-2423) Rolled files are not deleted when a date is used in the pattern

2018-08-28 Thread Carter Kozak (JIRA)


 [ 
https://issues.apache.org/jira/browse/LOG4J2-2423?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Carter Kozak updated LOG4J2-2423:
-
Description: 
In my appender definition I set 
filePattern="app/log/trace.%d\{yyyy-MM-dd}-%i.log.gz"
I would expect to see a maximum of 7 rolled trace logs, however I have 
accumulated over 30.

While running in a debugger, in AbstractRolloverStrategy.getEligibleFiles I see 
filePattern set to "trace.2018-08-22-(\d+).log.\*". I would expect the date to 
be replaced to something along the lines of 
"trace.(\d+)-(\d+)-(\d+)-(\d+).log.\*"

Based on the documentation and javadoc this doesn't appear to be entirely 
unexpected, however it is odd that based on the presence of a date in the file 
pattern a default rolling file appender may create up to 7 total files, or up 
to 7 files per date pattern minimum interval.

I'm curious if this has been discussed elsewhere that I may have missed, or if 
this is consistent with others' expectations of DefaultRolloverStrategy. If so I 
will update the documentation to be clearer around this point.

  was:
In my appender definition I set 
filePattern="app/log/trace.%d\{yyyy-MM-dd}-%i.log.gz"
I would expect to see a maximum of 7 rolled trace logs, however I have 
accumulated over 30.

While running in a debugger, in AbstractRolloverStrategy.getEligibleFiles I see 
filePattern set to "trace.2018-08-22-(\d+).log.*". I would expect the date to 
be replaced to something along the lines of 
"trace.(\d+)-(\d+)-(\d+)-(\d+).log.*"

Based on the documentation and javadoc this doesn't appear to be entirely 
unexpected, however it is odd that based on the presence of a date in the file 
pattern a default rolling file appender may create up to 7 total files, or up 
to 7 files per date pattern minimum interval.

I'm curious if this has been discussed elsewhere that I may have missed, or if 
this is consistent with others' expectations of DefaultRolloverStrategy. If so I 
will update the documentation to be clearer around this point.


> Rolled files are not deleted when a date is used in the pattern
> ---
>
> Key: LOG4J2-2423
> URL: https://issues.apache.org/jira/browse/LOG4J2-2423
> Project: Log4j 2
>  Issue Type: Bug
>  Components: Appenders
>Reporter: Carter Kozak
>Priority: Major
>
> In my appender definition I set 
> filePattern="app/log/trace.%d\{yyyy-MM-dd}-%i.log.gz"
> I would expect to see a maximum of 7 rolled trace logs, however I have 
> accumulated over 30.
> While running in a debugger, in AbstractRolloverStrategy.getEligibleFiles I 
> see filePattern set to "trace.2018-08-22-(\d+).log.\*". I would expect the 
> date to be replaced to something along the lines of 
> "trace.(\d+)-(\d+)-(\d+)-(\d+).log.\*"
> Based on the documentation and javadoc this doesn't appear to be entirely 
> unexpected, however it is odd that based on the presence of a date in the 
> file pattern a default rolling file appender may create up to 7 total files, 
> or up to 7 files per date pattern minimum interval.
> I'm curious if this has been discussed elsewhere that I may have missed, or 
> if this is consistent with others' expectations of DefaultRolloverStrategy. If 
> so I will update the documentation to be clearer around this point.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Created] (LOG4J2-2423) Rolled files are not deleted when a date is used in the pattern

2018-08-28 Thread Carter Kozak (JIRA)
Carter Kozak created LOG4J2-2423:


 Summary: Rolled files are not deleted when a date is used in the 
pattern
 Key: LOG4J2-2423
 URL: https://issues.apache.org/jira/browse/LOG4J2-2423
 Project: Log4j 2
  Issue Type: Bug
  Components: Appenders
Reporter: Carter Kozak


In my appender definition I set 
filePattern="app/log/trace.%d\{yyyy-MM-dd}-%i.log.gz"
I would expect to see a maximum of 7 rolled trace logs, however I have 
accumulated over 30.

While running in a debugger, in AbstractRolloverStrategy.getEligibleFiles I see 
filePattern set to "trace.2018-08-22-(\d+).log.*". I would expect the date to 
be replaced to something along the lines of 
"trace.(\d+)-(\d+)-(\d+)-(\d+).log.*"

Based on the documentation and javadoc this doesn't appear to be entirely 
unexpected, however it is odd that based on the presence of a date in the file 
pattern a default rolling file appender may create up to 7 total files, or up 
to 7 files per date pattern minimum interval.

I'm curious if this has been discussed elsewhere that I may have missed, or if 
this is consistent with others' expectations of DefaultRolloverStrategy. If so I 
will update the documentation to be clearer around this point.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Assigned] (LOG4J2-2423) Rolled files are not deleted when a date is used in the pattern

2018-08-28 Thread Carter Kozak (JIRA)


 [ 
https://issues.apache.org/jira/browse/LOG4J2-2423?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Carter Kozak reassigned LOG4J2-2423:


Assignee: Carter Kozak

> Rolled files are not deleted when a date is used in the pattern
> ---
>
> Key: LOG4J2-2423
> URL: https://issues.apache.org/jira/browse/LOG4J2-2423
> Project: Log4j 2
>  Issue Type: Bug
>  Components: Appenders
>Reporter: Carter Kozak
>Assignee: Carter Kozak
>Priority: Major
>
> In my appender definition I set 
> filePattern="app/log/trace.%d\{yyyy-MM-dd}-%i.log.gz"
> I would expect to see a maximum of 7 rolled trace logs, however I have 
> accumulated over 30.
> While running in a debugger, in AbstractRolloverStrategy.getEligibleFiles I 
> see filePattern set to "trace.2018-08-22-(\d+).log.\*". I would expect the 
> date to be replaced to something along the lines of 
> "trace.(\d+)-(\d+)-(\d+)-(\d+).log.\*"
> Based on the documentation and javadoc this doesn't appear to be entirely 
> unexpected, however it is odd that based on the presence of a date in the 
> file pattern a default rolling file appender may create up to 7 total files, 
> or up to 7 files per date pattern minimum interval.
> I'm curious if this has been discussed elsewhere that I may have missed, or 
> if this is consistent with others' expectations of DefaultRolloverStrategy. If 
> so I will update the documentation to be clearer around this point.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (LOGCXX-500) Logging in Timing-Critical Applications

2018-08-28 Thread Denys Smolianiuk (JIRA)


[ 
https://issues.apache.org/jira/browse/LOGCXX-500?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16594916#comment-16594916
 ] 

Denys Smolianiuk commented on LOGCXX-500:
-

I did notice a few failures; I will look into them.

Thanks,

Denys Smolianiuk

> Logging in Timing-Critical Applications
> ---
>
> Key: LOGCXX-500
> URL: https://issues.apache.org/jira/browse/LOGCXX-500
> Project: Log4cxx
>  Issue Type: Improvement
>  Components: Core
>Affects Versions: 0.10.0
>Reporter: Thorsten Schöning
>Priority: Minor
> Attachments: config.xml, main.cpp, non_blocking.diff, 
> non_blocking_wo_debian_control.diff
>
>
> The following arrived on the mailing list; it is provided here as well, 
> mainly to collect the given patches etc.:
> {quote}Hello All,
> I'd like to share some experience as well as some patches with regard 
> to using log4cxx in timing-critical applications. First, a few words about 
> our requirements: it's a service which must generate network 
> packets with up to a hundred microseconds of precision. Thus, it's very 
> important to have predictable code timing. One can argue that log4cxx 
> is not very well suited for such applications, but surprisingly it 
> works pretty well after some light tuning.
> So, what were the issues?
> Basically, from the library user's point of view they all looked the same: all 
> of a sudden, logging done with the LOG4CXX_DEBUG() macro could take an 
> unexpectedly long time to complete. For example, the same trace which 
> takes several μs 99% of the time would sometimes take hundreds of microseconds 
> or even a few milliseconds. After further investigation this 
> has been traced down to a few root causes:
> 1. The async logger (which we have been using, of course) has an internal queue 
> to pass log entries to the background disk-writer thread. This queue is 
> mutex-protected, which might seem fine unless you think a little bit 
> more about it. First of all, someone calling LOG4CXX_DEBUG() to simply 
> put something into the log might not expect to be blocked inside, 
> waiting for a mutex, at all. The second point is that, although measures 
> were taken to minimize the time the disk thread holds that lock, 
> OS schedulers often work in a way that a thread which is blocked on a 
> mutex gets de-scheduled. With a normal OS-scheduler quantum that means 
> that the logging thread can be preempted for milliseconds.
> 2. There are some mutexes protecting the internal state of both loggers 
> and appenders. This means that two separate threads calling 
> LOG4CXX_DEBUG() can block each other. Even if they are using different 
> loggers they would block on the appender! This has the same consequences 
> for execution timing and performance as described above.
> 3. The std::stringstream constructor has some internal locks of its 
> own. Unfortunately each MessageBuffer has its own instance of this 
> class. And, also unfortunately, a MessageBuffer is created inside the 
> LOG4CXX_DEBUG() macro. There is an optimization to not create the stringstream 
> when logging simple strings, but as soon as your log statement has a 
> single '<<' operator it's created.
> 4. Dynamic memory allocations. Unfortunately there are still quite a few 
> of them, even though a memory pool is used in some other places. Thus, 
> hidden calls to new and malloc induce unpredictable delays.
> So, what did we do to mitigate these problems?
> 1. The natural solution for this issue was to use an atomic queue. There are 
> a few of them available, but we made use of boost::lockfree::queue as it 
> can serve as a drop-in replacement, allowing us to keep all present 
> functionality.
> 2. After looking more into the code, it appeared that two concurrent 
> calls to LOG4CXX_DEBUG() from within different threads are not harmful 
> because the internal structures of the logger and appender are not being 
> changed there. What really requires protection is only concurrency 
> between logging and configuring. Thus, we came to a solution: 
> read-write locks where logging calls act as readers and 
> configuration/exiting calls are writers. With such an approach, multiple 
> threads calling LOG4CXX_DEBUG() became free of any contention.
> 3. This problem also has one simple solution: make one static 
> std::stringstream object per thread using thread_local. 
> Unfortunately we found one drawback: thread_local memory is not 
> released if the thread is not detached or joined. As there is some code 
> which does neither of these, we made the static stringstream an XML file 
> configuration option. Also, there could be an issue with using multiple 
> MessageBuffer instances from within a single thread, but LOG4CXX_DEBUG() 
> is not doing that.
> 4. At this time we didn't do anything to address the dynamic memory 
> allocation issue.
> So, if you want to give our pat

[jira] [Commented] (LOGCXX-500) Logging in Timing-Critical Applications

2018-08-28 Thread JIRA


[ 
https://issues.apache.org/jira/browse/LOGCXX-500?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16594911#comment-16594911
 ] 

Thorsten Schöning commented on LOGCXX-500:
--

Great, it compiles now without any issues in my pretty old environment, so the 
changes should be as backwards compatible as things can be. :-) Just out of 
interest, did you run the provided tests as well or only your own? If the tests 
succeed with all your changes enabled, it might be worth merging the branch 
to master because it shouldn't break anything. If tests fail instead, it might 
be easier to fix things within the branch in the long term.

> Logging in Timing-Critical Applications
> ---
>
> Key: LOGCXX-500
> URL: https://issues.apache.org/jira/browse/LOGCXX-500
> Project: Log4cxx
>  Issue Type: Improvement
>  Components: Core
>Affects Versions: 0.10.0
>Reporter: Thorsten Schöning
>Priority: Minor
> Attachments: config.xml, main.cpp, non_blocking.diff, 
> non_blocking_wo_debian_control.diff
>
>
> The following arrived on the mailing list; it is provided here as well, 
> mainly to collect the given patches etc.:
> {quote}Hello All,
> I'd like to share some experience as well as some patches with regard 
> to using log4cxx in timing-critical applications. First, a few words about 
> our requirements: it's a service which must generate network 
> packets with up to a hundred microseconds of precision. Thus, it's very 
> important to have predictable code timing. One can argue that log4cxx 
> is not very well suited for such applications, but surprisingly it 
> works pretty well after some light tuning.
> So, what were the issues?
> Basically, from the library user's point of view they all looked the same: all 
> of a sudden, logging done with the LOG4CXX_DEBUG() macro could take an 
> unexpectedly long time to complete. For example, the same trace which 
> takes several μs 99% of the time would sometimes take hundreds of microseconds 
> or even a few milliseconds. After further investigation this 
> has been traced down to a few root causes:
> 1. The async logger (which we have been using, of course) has an internal queue 
> to pass log entries to the background disk-writer thread. This queue is 
> mutex-protected, which might seem fine unless you think a little bit 
> more about it. First of all, someone calling LOG4CXX_DEBUG() to simply 
> put something into the log might not expect to be blocked inside, 
> waiting for a mutex, at all. The second point is that, although measures 
> were taken to minimize the time the disk thread holds that lock, 
> OS schedulers often work in a way that a thread which is blocked on a 
> mutex gets de-scheduled. With a normal OS-scheduler quantum that means 
> that the logging thread can be preempted for milliseconds.
> 2. There are some mutexes protecting the internal state of both loggers 
> and appenders. This means that two separate threads calling 
> LOG4CXX_DEBUG() can block each other. Even if they are using different 
> loggers they would block on the appender! This has the same consequences 
> for execution timing and performance as described above.
> 3. The std::stringstream constructor has some internal locks of its 
> own. Unfortunately each MessageBuffer has its own instance of this 
> class. And, also unfortunately, a MessageBuffer is created inside the 
> LOG4CXX_DEBUG() macro. There is an optimization to not create the stringstream 
> when logging simple strings, but as soon as your log statement has a 
> single '<<' operator it's created.
> 4. Dynamic memory allocations. Unfortunately there are still quite a few 
> of them, even though a memory pool is used in some other places. Thus, 
> hidden calls to new and malloc induce unpredictable delays.
> So, what did we do to mitigate these problems?
> 1. The natural solution for this issue was to use an atomic queue. There are 
> a few of them available, but we made use of boost::lockfree::queue as it 
> can serve as a drop-in replacement, allowing us to keep all present 
> functionality.
> 2. After looking more into the code, it appeared that two concurrent 
> calls to LOG4CXX_DEBUG() from within different threads are not harmful 
> because the internal structures of the logger and appender are not being 
> changed there. What really requires protection is only concurrency 
> between logging and configuring. Thus, we came to a solution: 
> read-write locks where logging calls act as readers and 
> configuration/exiting calls are writers. With such an approach, multiple 
> threads calling LOG4CXX_DEBUG() became free of any contention.
> 3. This problem also has one simple solution: make one static 
> std::stringstream object per thread using thread_local. 
> Unfortunately we found one drawback: thread_local memory is not 
> released if the thread is not detached or joined.

[GitHub] logging-log4cxx pull request #7: Fixed build when std::atomic is not availab...

2018-08-28 Thread asfgit
Github user asfgit closed the pull request at:

https://github.com/apache/logging-log4cxx/pull/7


---


[jira] [Updated] (LOG4J2-2420) RequestContextFilter logging cleanup

2018-08-28 Thread Andrei Ivanov (JIRA)


 [ 
https://issues.apache.org/jira/browse/LOG4J2-2420?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andrei Ivanov updated LOG4J2-2420:
--
Description: 
The {{RequestContextFilter}} logs some details that should be logged at {{DEBUG}} 
or {{TRACE}} instead of {{INFO}}.
{noformat}
INFO  [org.apache.logging.log4j.audit.rest.RequestContextFilter] - Starting 
request {}/productrepository/2594
INFO  [org.apache.logging.log4j.audit.rest.RequestContextFilter] - Request 
/productrepository/2594 completed in 0.139279500 seconds
{noformat}
There's even a small bug on the 1st 
[line|https://github.com/apache/logging-log4j-audit/blob/9bab5dad26e67642573cbc8257b6cbcafb23bf3c/log4j-audit/log4j-audit-api/src/main/java/org/apache/logging/log4j/audit/rest/RequestContextFilter.java#L82]
 where concatenation was used.
 The calculation of the request duration should also be moved into an if block, 
to avoid computing it when it won't be logged.

Same for the Maven plugin, I think it should print the name of the generated 
classes only on debug.

  was:
The {{RequestContextFilter}} logs some details that should be logged at {{DEBUG}} 
or {{TRACE}} instead of {{INFO}}.
{noformat}
INFO  [org.apache.logging.log4j.audit.rest.RequestContextFilter] - Starting 
request {}/productrepository/2594
INFO  [org.apache.logging.log4j.audit.rest.RequestContextFilter] - Request 
/productrepository/2594 completed in 0.139279500 seconds
{noformat}
There's even a small bug on the 1st 
[line|https://github.com/apache/logging-log4j-audit/blob/9bab5dad26e67642573cbc8257b6cbcafb23bf3c/log4j-audit/log4j-audit-api/src/main/java/org/apache/logging/log4j/audit/rest/RequestContextFilter.java#L82]
 where concatenation was used.
 The calculation of the request duration should also be moved into an if block, 
to avoid computing it when it won't be logged.
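A minimal sketch of the suggested cleanup (hedged; the class and method names are illustrative, not the actual RequestContextFilter code):

{code:java}
import org.apache.logging.log4j.LogManager;
import org.apache.logging.log4j.Logger;

public class TimedFilter {
    private static final Logger logger = LogManager.getLogger(TimedFilter.class);

    void logCompletion(String uri, long startNanos) {
        // Guard the level so the duration is only computed when it will actually be
        // logged, and use a parameterized message instead of string concatenation.
        if (logger.isDebugEnabled()) {
            double seconds = (System.nanoTime() - startNanos) / 1_000_000_000.0;
            logger.debug("Request {} completed in {} seconds", uri, seconds);
        }
    }
}
{code}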


> RequestContextFilter logging cleanup
> 
>
> Key: LOG4J2-2420
> URL: https://issues.apache.org/jira/browse/LOG4J2-2420
> Project: Log4j 2
>  Issue Type: Improvement
>  Components: Log4j-Audit
>Affects Versions: Log4j-Audit 1.0.0
>Reporter: Andrei Ivanov
>Priority: Minor
>
> The {{RequestContextFilter}} logs some details that should be logged at 
> {{DEBUG}} or {{TRACE}} instead of {{INFO}}.
> {noformat}
> INFO  [org.apache.logging.log4j.audit.rest.RequestContextFilter] - Starting 
> request {}/productrepository/2594
> INFO  [org.apache.logging.log4j.audit.rest.RequestContextFilter] - Request 
> /productrepository/2594 completed in 0.139279500 seconds
> {noformat}
> There's even a small bug on the 1st 
> [line|https://github.com/apache/logging-log4j-audit/blob/9bab5dad26e67642573cbc8257b6cbcafb23bf3c/log4j-audit/log4j-audit-api/src/main/java/org/apache/logging/log4j/audit/rest/RequestContextFilter.java#L82]
>  where concatenation was used.
>  The calculation of the request duration should also be moved into an if block, 
> to avoid computing it when it won't be logged.
> Same for the Maven plugin, I think it should print the name of the generated 
> classes only on debug.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (LOG4J2-1002) PatternLayout is missing a new line for Exceptions with the short option

2018-08-28 Thread ASF GitHub Bot (JIRA)


[ 
https://issues.apache.org/jira/browse/LOG4J2-1002?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16594839#comment-16594839
 ] 

ASF GitHub Bot commented on LOG4J2-1002:


GitHub user quaff opened a pull request:

https://github.com/apache/logging-log4j2/pull/214

LOG4J2-1002 - ThrowablePatternConverter should preserve EOF

throwable.printStackTrace() produces a trailing line separator, but 
w.toString().split(Strings.LINE_SEPARATOR) drops it; the buffer should append a 
newline after the loop (see the illustration below).
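
The root cause can be reproduced with plain JDK code, independent of Log4j: 
String.split with the default limit discards trailing empty strings, so the 
trailing separator leaves no trace. A small stand-alone illustration (class name 
made up):
{code:java}
public class SplitDropsTrailingNewline {
    public static void main(final String[] args) {
        final String sep = System.lineSeparator();
        final String stack = "java.lang.RuntimeException: boom" + sep
                + "\tat Example.main(Example.java:5)" + sep;
        // split() with the default limit drops trailing empty strings,
        // so the trailing separator leaves no trace in the array.
        final String[] lines = stack.split(sep);
        System.out.println(lines.length);            // prints 2
        System.out.println(String.join(sep, lines)); // no trailing newline any more
    }
}
{code}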

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/quaff/logging-log4j2 master

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/logging-log4j2/pull/214.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #214


commit 8eeaf69d0e4ca979dc2014e84170234d246cd75b
Author: Yanming Zhou 
Date:   2018-08-28T11:06:19Z

ThrowablePatternConverter should preserve EOF

LOG4J2-1002

Signed-off-by: Yanming Zhou 




> PatternLayout is missing a new line for Exceptions with the short option
> 
>
> Key: LOG4J2-1002
> URL: https://issues.apache.org/jira/browse/LOG4J2-1002
> Project: Log4j 2
>  Issue Type: Bug
>  Components: Layouts, Pattern Converters
>Affects Versions: 2.2
> Environment: Windows, eclipse
>Reporter: Robert Schaft
>Priority: Major
>   Original Estimate: 336h
>  Remaining Estimate: 336h
>
> I am struggling to get the PatternLayout right when using %throwable, %ex or 
> similar.
> The first problem is that, if the exception is limited by the number of lines 
> (e.g. with the {{short}} option or by providing a number), the converter 
> {{ExtendedThrowablePatternConverter}} doesn't attach a newline to the end of 
> the stack.
> On the other hand it does attach a newline at the end of the full stack.
> That is why
> {quote}
> {{}}
> {quote}
> produces the expected result for messages with and without throwables: there 
> are no empty lines in the log file and every log message starts on a new 
> line. 
> What about {{%ex\{short\}}}?
> {quote}
> {{}}
> {quote}
> This has the problem that messages with throwables do not end with a new 
> line. This produces all kinds of problems.
> Ok, let's add a newline
> {quote}
> {{ />}}
> {quote}
> This has the problem that messages _without_ throwables are followed by an 
> empty line. This is not acceptable on the console.
> So we need something more complicated:
> {quote}
> {{ pattern="%msg%replace\{%n%ex\{short\}%n\}\{\[\r\n]+$\}\{\}%n" />}}
> {quote}
> Yeah! It works (at least on Windows, Unix, Linux, Mac), as long as the 
> undocumented throwable {{separator}} option is not used. But it's ugly and 
> requires the {{alwaysWriteExceptions}} attribute because the throwable pattern 
> detection no longer works.
> Short Term solution: Always add a newline to the exception.
> Long Term Solution:
> Add a conversion pattern {{%onThrowable\{pattern1\}\[\{pattern2\}]}} where 
> pattern1 is appended when there is a throwable attached to the log message 
> and the optional pattern2 is appended when no throwable is attached.
> The {{alwaysWriteExceptions="false"}} parameter could be replaced by 
> {{pattern="%onThrowable\{\}"}} 



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[GitHub] logging-log4j2 pull request #214: LOG4J2-1002 - ThrowablePatternConverter sh...

2018-08-28 Thread quaff
GitHub user quaff opened a pull request:

https://github.com/apache/logging-log4j2/pull/214

LOG4J2-1002 - ThrowablePatternConverter should preserve EOF

throwable.printStackTrace() produces a trailing line separator, but 
w.toString().split(Strings.LINE_SEPARATOR) drops it; the buffer should append a 
newline after the loop.

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/quaff/logging-log4j2 master

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/logging-log4j2/pull/214.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #214


commit 8eeaf69d0e4ca979dc2014e84170234d246cd75b
Author: Yanming Zhou 
Date:   2018-08-28T11:06:19Z

ThrowablePatternConverter should preserve EOF

LOG4J2-1002

Signed-off-by: Yanming Zhou 




---


[jira] [Commented] (LOG4J2-1002) PatternLayout is missing a new line for Exceptions with the short option

2018-08-28 Thread Yanming Zhou (JIRA)


[ 
https://issues.apache.org/jira/browse/LOG4J2-1002?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16594803#comment-16594803
 ] 

Yanming Zhou commented on LOG4J2-1002:
--

https://github.com/apache/logging-log4j2/blob/master/log4j-core/src/main/java/org/apache/logging/log4j/core/pattern/ThrowablePatternConverter.java#L185

{code:java}
final StringWriter w = new StringWriter();
throwable.printStackTrace(new PrintWriter(w));

final String[] array = w.toString().split(Strings.LINE_SEPARATOR);
final int limit = options.minLines(array.length) - 1;
final boolean suffixNotBlank = Strings.isNotBlank(suffix);
for (int i = 0; i <= limit; ++i) {
    buffer.append(array[i]);
    if (suffixNotBlank) {
        buffer.append(' ');
        buffer.append(suffix);
    }
    if (i < limit) {
        buffer.append(options.getSeparator());
    }
}
buffer.append("\n"); // this line should fix it
{code}

throwable.printStackTrace() produces a newline at the end, but 
w.toString().split(Strings.LINE_SEPARATOR) loses it; the buffer should append a 
newline after the loop.

> PatternLayout is missing a new line for Exceptions with the short option
> 
>
> Key: LOG4J2-1002
> URL: https://issues.apache.org/jira/browse/LOG4J2-1002
> Project: Log4j 2
>  Issue Type: Bug
>  Components: Layouts, Pattern Converters
>Affects Versions: 2.2
> Environment: Windows, eclipse
>Reporter: Robert Schaft
>Priority: Major
>   Original Estimate: 336h
>  Remaining Estimate: 336h
>
> I am struggling to get the PatternLayout right when using %throwable, %ex or 
> similar.
> The first problem is that, if the exception is limited by the number of lines 
> (e.g. with the {{short}} option or by providing a number), the converter 
> {{ExtendedThrowablePatternConverter}} doesn't attach a newline to the end of 
> the stack.
> On the other hand it does attach a newline at the end of the full stack.
> That is why
> {quote}
> {{}}
> {quote}
> produces the expected result for messages with and without throwables: there 
> are no empty lines in the log file and every log message starts on a new 
> line. 
> What about {{%ex\{short\}}}?
> {quote}
> {{}}
> {quote}
> This has the problem that messages with throwables do not end with a new 
> line. This produces all kinds of problems.
> Ok, let's add a newline
> {quote}
> {{ />}}
> {quote}
> This has the problem that messages _without_ throwables are followed by an 
> empty line. This is not acceptable on the console.
> So we need something more complicated:
> {quote}
> {{ pattern="%msg%replace\{%n%ex\{short\}%n\}\{\[\r\n]+$\}\{\}%n" />}}
> {quote}
> Yeah! It works (at least on Windows, Unix, Linux, Mac), as long as the 
> undocumented throwable {{separator}} option is not used. But it's ugly and 
> requires the {{alwaysWriteExceptions}} attribute because the throwable pattern 
> detection no longer works.
> Short Term solution: Always add a newline to the exception.
> Long Term Solution:
> Add a conversion pattern {{%onThrowable\{pattern1\}\[\{pattern2\}]}} where 
> pattern1 is appended when there is a throwable attached to the log message 
> and the optional pattern2 is appended when no throwable is attached.
> The {{alwaysWriteExceptions="false"}} parameter could be replaced by 
> {{pattern="%onThrowable\{\}"}} 



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (LOG4J2-2412) Cannot create log4j2 logfile if path contains plus '+' characters - dest tag in configuration

2018-08-28 Thread Karel (JIRA)


 [ 
https://issues.apache.org/jira/browse/LOG4J2-2412?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Karel updated LOG4J2-2412:
--
Priority: Major  (was: Minor)

> Cannot create log4j2 logfile if path contains plus '+' characters - dest tag 
> in configuration
> -
>
> Key: LOG4J2-2412
> URL: https://issues.apache.org/jira/browse/LOG4J2-2412
> Project: Log4j 2
>  Issue Type: Bug
>  Components: Core
>Affects Versions: 2.10.0, 2.11.1
> Environment: non OS specific - happens on Linux/Windows
>Reporter: Karel
>Priority: Major
>
> Hello,
> In my application I am loading prepared XML configuration for logging.
> The configuration contains
> {code:java}
> // configuration.xml
> 
> 
> {code}
>  For this case everything is working fine. But I accidentally put "+" into the path
> {code:java}
> // configuration.xml
> 
> 
> {code}
> which starts writing to the console:
> {code:java}
> 2018-08-16 08:40:49,668 main ERROR File could not be found at 
> [/home/kure/MyApp+/log/log4j2.log]. Falling back to default of stdout.
> {code}
>  
> after some debugging I found that in the class:
> org.apache.logging.log4j.core.helpers.FileUtils.java
> {code:java}
> // org.apache.logging.log4j.core.helpers.FileUtils.java
> String fileName = uri.toURL().getFile();
> if (new File(fileName).exists()) { // LOG4J2-466
> return new File(fileName); // allow files with '+' char in name
> }
> fileName = URLDecoder.decode(fileName, charsetName);
> {code}
> line:
> {code:java}
> // org.apache.logging.log4j.core.helpers.FileUtils.java
> //home/kure/.m2/repository/org/apache/logging/log4j/log4j-core/2.11.0/log4j-core-2.11.0-sources.jar!/org/apache/logging/log4j/core/util/FileUtils.java:91
> return new File(URLDecoder.decode(fileName, "UTF8"));
> {code}
> converts "+" to " " (space).
> A workaround for this problem is to manually create the file.
> Until now I have relied on log4j automatically creating the missing files. 
> Is this a bad habit?
> This happens only for the configuration's *dest* element. Appenders 
> (RollingFiles) are not affected by this issue.
> As mentioned in your source code, this issue is very similar to LOG4J2-466.
> Thanks for your response
> Karel Cerman
>  
> updated:
>  If I use an even crazier path like 
> {code:java}
> // configuration.xml
> 
> 
> {code}
> Then even the workaround of creating the file on that path does not work -> priority increased
> Thanks for the message
> Karel Cerman
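
The decoding behaviour behind this report can be shown with plain JDK code, 
independent of Log4j (the class name below is made up for illustration): 
URLDecoder treats a literal '+' as an encoded space, so the decoded path no 
longer matches the existing directory.
{code:java}
import java.io.UnsupportedEncodingException;
import java.net.URLDecoder;

public class PlusInPathDemo {
    public static void main(final String[] args) throws UnsupportedEncodingException {
        final String fileName = "/home/kure/MyApp+/log/log4j2.log";
        // URLDecoder interprets '+' as a space, which breaks paths that
        // legitimately contain a plus character.
        final String decoded = URLDecoder.decode(fileName, "UTF-8");
        System.out.println(decoded); // prints /home/kure/MyApp /log/log4j2.log
    }
}
{code}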



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (LOG4J2-2412) Cannot create log4j2 logfile if path contains plus '+' characters - dest tag in configuration

2018-08-28 Thread Karel (JIRA)


 [ 
https://issues.apache.org/jira/browse/LOG4J2-2412?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Karel updated LOG4J2-2412:
--
Description: 
Hello,

In my application I am loading prepared XML configuration for logging.

The configuration contains
{code:java}
// configuration.xml


{code}
 For this case everything is working fine. But I accidentally put "+" into the path
{code:java}
// configuration.xml


{code}
which starts writing to the console:
{code:java}
2018-08-16 08:40:49,668 main ERROR File could not be found at 
[/home/kure/MyApp+/log/log4j2.log]. Falling back to default of stdout.
{code}
 

after some debugging I found that in the class:

org.apache.logging.log4j.core.helpers.FileUtils.java
{code:java}
// org.apache.logging.log4j.core.helpers.FileUtils.java
String fileName = uri.toURL().getFile();
if (new File(fileName).exists()) { // LOG4J2-466
return new File(fileName); // allow files with '+' char in name
}
fileName = URLDecoder.decode(fileName, charsetName);
{code}
line:
{code:java}
// org.apache.logging.log4j.core.helpers.FileUtils.java
//home/kure/.m2/repository/org/apache/logging/log4j/log4j-core/2.11.0/log4j-core-2.11.0-sources.jar!/org/apache/logging/log4j/core/util/FileUtils.java:91
return new File(URLDecoder.decode(fileName, "UTF8"));
{code}
converts "+" to " " (space).

A workaround for this problem is to manually create the file.

Until now I have relied on log4j automatically creating the missing files. 
Is this a bad habit?

This happens only for the configuration's *dest* element. Appenders 
(RollingFiles) are not affected by this issue.

As mentioned in your source code, this issue is very similar to LOG4J2-466.

Thanks for your response

Karel Cerman

 

updated:

 If I use an even crazier path like 
{code:java}
// configuration.xml


{code}
Then even the workaround of creating the file on that path does not work -> priority increased

Thanks for the message
Karel Cerman

  was:
Hello,

In my application I am loading prepared XML configuration for logging.

The configuration contains
{code:java}
// configuration.xml


{code}
 For this case everything is working fine. But I accidentally put "+" into the path
{code:java}
// configuration.xml


{code}
which starts writing to the console:
{code:java}
2018-08-16 08:40:49,668 main ERROR File could not be found at 
[/home/kure/MyApp+/log/log4j2.log]. Falling back to default of stdout.
{code}
 

after some debugging I found that in the class:

org.apache.logging.log4j.core.helpers.FileUtils.java
{code:java}
// org.apache.logging.log4j.core.helpers.FileUtils.java
String fileName = uri.toURL().getFile();
if (new File(fileName).exists()) { // LOG4J2-466
return new File(fileName); // allow files with '+' char in name
}
fileName = URLDecoder.decode(fileName, charsetName);
{code}
line:
{code:java}
// org.apache.logging.log4j.core.helpers.FileUtils.java
//home/kure/.m2/repository/org/apache/logging/log4j/log4j-core/2.11.0/log4j-core-2.11.0-sources.jar!/org/apache/logging/log4j/core/util/FileUtils.java:91
return new File(URLDecoder.decode(fileName, "UTF8"));
{code}
converts "+" to " " (space).

A workaround for this problem is to manually create the file.

Until now I have relied on log4j automatically creating the missing files. 
Is this a bad habit?

This happens only for the configuration's *dest* element. Appenders 
(RollingFiles) are not affected by this issue.

As mentioned in your source code, this issue is very similar to LOG4J2-466.

Thanks for your response

Karel Cerman


> Cannot create log4j2 logfile if path contains plus '+' characters - dest tag 
> in configuration
> -
>
> Key: LOG4J2-2412
> URL: https://issues.apache.org/jira/browse/LOG4J2-2412
> Project: Log4j 2
>  Issue Type: Bug
>  Components: Core
>Affects Versions: 2.10.0, 2.11.1
> Environment: non OS specific - happens on Linux/Windows
>Reporter: Karel
>Priority: Minor
>
> Hello,
> In my application I am loading prepared XML configuration for logging.
> The configuration contains
> {code:java}
> // configuration.xml
> 
> 
> {code}
>  For this case everything is working fine. But I accidentally put "+" into the path
> {code:java}
> // configuration.xml
> 
> 
> {code}
> which starts writing to the console:
> {code:java}
> 2018-08-16 08:40:49,668 main ERROR File could not be found at 
> [/home/kure/MyApp+/log/log4j2.log]. Falling back to default of stdout.
> {code}
>  
> after some debugging I found that in the class:
> org.apache.logging.log4j.core.helpers.FileUtils.java
> {code:java}
> // org.apache.logging.log4j.core.helpers.FileUtils.java
> String fileName = uri.toURL().getFile();
> if (new File(fileName).exists()) { // LOG4J2-466
> return new File(fileName); // allow files with '+' char in name
> }
> fileName = URLDecoder.decode(fileNam

[GitHub] logging-log4cxx pull request #7: Fixed build when std::atomic is not availab...

2018-08-28 Thread DenysSmolianiuk
GitHub user DenysSmolianiuk opened a pull request:

https://github.com/apache/logging-log4cxx/pull/7

Fixed build when std::atomic is not available



You can merge this pull request into a Git repository by running:

$ git pull https://github.com/DenysSmolianiuk/logging-log4cxx LOGCXX-500

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/logging-log4cxx/pull/7.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #7


commit 23f1d35d377e77392a8b81f3274d3e2dc09e79c3
Author: Denys Smolianiuk 
Date:   2018-08-28T08:36:31Z

Fixed build when std::atomic is not available




---


[jira] [Commented] (LOGCXX-500) Logging in Timing-Critical Applications

2018-08-28 Thread Denys Smolianiuk (JIRA)


[ 
https://issues.apache.org/jira/browse/LOGCXX-500?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16594696#comment-16594696
 ] 

Denys Smolianiuk commented on LOGCXX-500:
-

It's probably failing on the include? I must have overlooked that. It would be 
easier to surround it with an ifdef because, in order to use APR atomics, one 
would need to know the exact bit width of the underlying type, which isn't that 
straightforward for apr_os_thread_t. I will submit a PR shortly.

> Logging in Timing-Critical Applications
> ---
>
> Key: LOGCXX-500
> URL: https://issues.apache.org/jira/browse/LOGCXX-500
> Project: Log4cxx
>  Issue Type: Improvement
>  Components: Core
>Affects Versions: 0.10.0
>Reporter: Thorsten Schöning
>Priority: Minor
> Attachments: config.xml, main.cpp, non_blocking.diff, 
> non_blocking_wo_debian_control.diff
>
>
> The following arrived on the mailing list; it is provided here as well, 
> mainly to additionally collect the given patches etc.:
> {quote}Hello All,
> I'd like to share some experience as well as some patches with regard 
> to using log4cxx in timing-critical application. First few words about 
> our requirements: it's a service which must generate some network 
> packets with up to hundred of microseconds precision. Thus, it's very 
> important to have predictable code timing. One can argue that log4cxx 
> is not very well suited for such applications, but surprisingly it 
> works pretty well after some light tuning.
> So, what were the issues?
> Basically, from the library user's point of view they looked the same: all 
> of a sudden, logging done with the LOG4CXX_DEBUG() macro could take an 
> unexpectedly long time to complete. For example, the same trace which takes 
> several μs 99% of the time would sometimes take hundreds of microseconds or 
> even a few milliseconds. After further investigation this has been traced 
> down to a few root causes:
> 1. The async logger (which we have been using, of course) has an internal 
> queue to pass log entries to the background disk-writer thread. This queue is 
> mutex-protected, which might seem fine unless you think a little bit more 
> about it. First of all, someone calling LOG4CXX_DEBUG() to simply put 
> something into the log might not expect to be blocked waiting for a mutex at 
> all. The second point is that, although measures were taken to minimize the 
> time the disk thread holds that lock, OS schedulers often work in a way that 
> a thread blocked on a mutex gets de-scheduled. With a normal OS-scheduler 
> quantum that means the logging thread can be preempted for milliseconds.
> 2. There are some mutexes protecting the internal state of both loggers 
> and appenders. This means that two separate threads calling 
> LOG4CXX_DEBUG() can block each other. Even if they are using different 
> loggers they would block on the appender! This has the same consequences 
> for execution timing and performance as described above.
> 3. The std::stringstream constructor has some internal locks of its own. 
> Unfortunately each MessageBuffer has its own instance of this class, and 
> MessageBuffer is created inside the LOG4CXX_DEBUG() macro. There is an 
> optimization that avoids creating the stringstream when logging simple 
> strings, but as soon as your log statement has a single '<<' operator it is 
> created.
> 4. Dynamic memory allocations. Unfortunately there are still quite a few of 
> them even though a memory pool is used in some other places. Thus, hidden 
> calls to new and malloc induce unpredictable delays.
> So, what we did to mitigate these problems?
> 1. A natural solution for this issue was to use an atomic (lock-free) queue. 
> There are a few of them available, but we made use of boost::lockfree::queue 
> as it can serve as a drop-in replacement, allowing us to keep all present 
> functionality.
> 2. After looking more into the code it appeared that two concurrent calls 
> to LOG4CXX_DEBUG() from within different threads are not harmful, because 
> the internal structures of the logger and appender are not being changed 
> there. The only thing that really requires protection is concurrency between 
> logging and configuring. Thus, we came to a solution: read-write locks, 
> where logging calls act as readers and configuration/exiting calls act as 
> writers. With such an approach, multiple threads calling LOG4CXX_DEBUG() 
> become free of any contention.
> 3. This problem also has a simple solution: make one static 
> std::stringstream object per thread using thread_local. Unfortunately we 
> found one drawback: thread_local memory is not released if the thread is 
> not detached or joined. As there is some code which does neither of these, 
> we made the static stringstream an XML file configuration option. Also, 
> there could be an issue with using multiple 
>

[jira] [Created] (LOG4J2-2422) Consider handling unchecked exceptions while loading plugins

2018-08-28 Thread rswart (JIRA)
rswart created LOG4J2-2422:
--

 Summary: Consider handling unchecked exceptions while loading 
plugins
 Key: LOG4J2-2422
 URL: https://issues.apache.org/jira/browse/LOG4J2-2422
 Project: Log4j 2
  Issue Type: Bug
  Components: Plugins
Affects Versions: 2.8.2
Reporter: rswart


The PluginRegistry handles 
[ClassNotFoundException|https://github.com/apache/logging-log4j2/blob/e741549928b2acbcb2d11ad285aa84ee88728e49/log4j-core/src/main/java/org/apache/logging/log4j/core/config/plugins/util/PluginRegistry.java#L185]
 but does not handle unchecked errors like NoClassDefFoundError. As a result, 
applications may not start when loading a plugin fails with such an error.

 

Here is the scenario we ran into:

 

We use [logstash-gelf|http://logging.paluch.biz/] in standardized Tomcat 
Docker images to send Java Util Logging (as used by Tomcat) to Graylog. To do 
this we add the logstash-gelf jar to the $CATALINA_HOME/lib directory, 
effectively placing it on Tomcat's common loader classpath. In essence there is 
no log4j involved, but the logstash-gelf jar contains integrations for various 
logging frameworks, including a log4j2 appender.

When a web application deployed on this Tomcat instance uses log4j2 as its 
logging framework, the logstash-gelf appender is found during plugin scanning 
and the PluginRegistry tries to load it (even if the appender is not used in 
the log4j configuration). The logstash-gelf plugin is not loaded via the web 
application class loader but through the parent common loader. It can find the 
plugin class but not the dependent log4j2 classes, as they are only on the 
classpath of the web application class loader:

 
{code:java}
java.lang.NoClassDefFoundError: 
org/apache/logging/log4j/core/appender/AbstractAppender
{code}
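
A minimal sketch, not the actual PluginRegistry code, of the kind of guard this 
report asks for: treat a LinkageError (such as NoClassDefFoundError) raised while 
loading a plugin class the same way as a ClassNotFoundException, so that one 
unloadable plugin does not prevent the application from starting. All class and 
method names below are hypothetical.
{code:java}
import java.util.List;

public class TolerantPluginLoading {

    static void loadPlugins(final List<String> pluginClassNames, final ClassLoader loader) {
        for (final String className : pluginClassNames) {
            try {
                final Class<?> pluginClass = Class.forName(className, true, loader);
                System.out.println("Registered plugin " + pluginClass.getName());
            } catch (ClassNotFoundException | LinkageError e) {
                // The plugin class, or one of its dependencies, is not visible to
                // this class loader; log and skip instead of failing startup.
                System.err.println("Skipping plugin " + className + ": " + e);
            }
        }
    }

    public static void main(final String[] args) {
        loadPlugins(List.of("com.example.SomeGelfAppender"),
                Thread.currentThread().getContextClassLoader());
    }
}
{code}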
 

 



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)