[jira] [Closed] (LOG4J2-2422) Handle some unchecked exceptions while loading plugins

2018-08-29 Thread rswart (JIRA)


 [ 
https://issues.apache.org/jira/browse/LOG4J2-2422?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

rswart closed LOG4J2-2422.
--

Tested using log4j-core-2.11.2-20180828.224911-32.jar. Application now starts 
successfully.

I did not see the log message 'Plugin [{}] could not be loaded due to linkage 
error.' in my logs, but that could be due to my setup.

 

Thank you for the quick fix!

> Handle some unchecked exceptions while loading plugins
> --
>
> Key: LOG4J2-2422
> URL: https://issues.apache.org/jira/browse/LOG4J2-2422
> Project: Log4j 2
>  Issue Type: Bug
>  Components: Plugins
>Affects Versions: 2.8.2
>Reporter: rswart
>Assignee: Gary Gregory
>Priority: Major
> Fix For: 3.0.0, 2.11.2
>
>
> The PluginRegistry handles 
> [ClassNotFoundException|https://github.com/apache/logging-log4j2/blob/e741549928b2acbcb2d11ad285aa84ee88728e49/log4j-core/src/main/java/org/apache/logging/log4j/core/config/plugins/util/PluginRegistry.java#L185]
>  but does not handle unchecked exceptions like NoClassDefFoundError. As a 
> result, applications may not start when loading a plugin fails with an 
> unchecked exception.
>  
> Here is the scenario we ran into:
>  
> We use [logstash-gelf|http://logging.paluch.biz/] in standardized Tomcat 
> Docker images to send Java Util Logging output (as used by Tomcat) to Graylog. 
> To do this we add the logstash-gelf jar to the $CATALINA_HOME/lib directory, 
> effectively placing it on Tomcat's common loader classpath. In essence no 
> log4j is involved, but the logstash-gelf jar contains integrations for 
> various logging frameworks, including a log4j2 appender.
> When a web application deployed on this Tomcat instance uses log4j2 as its 
> logging framework, the logstash-gelf appender is found during plugin scanning 
> and the PluginRegistry tries to load it (even if the appender is not used in 
> the log4j configuration). The logstash-gelf plugin is not loaded via the 
> web application's classloader, but through the parent common loader. The 
> common loader can find the plugin class but not the dependent log4j2 classes, 
> as they are only on the classpath of the web application's classloader:
>  
> {code:java}
> java.lang.NoClassDefFoundError: 
> org/apache/logging/log4j/core/appender/AbstractAppender
> {code}
>  
>  
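A minimal sketch of the kind of guard this asks for, assuming a hypothetical 
loadPluginClass helper (the actual PluginRegistry code and log wording may 
differ): the point is simply that LinkageError, the parent of 
NoClassDefFoundError, is caught alongside ClassNotFoundException so a single 
unloadable plugin does not prevent the application from starting.

{code:java}
import org.apache.logging.log4j.Logger;
import org.apache.logging.log4j.status.StatusLogger;

// Hypothetical helper, not the actual PluginRegistry code: resolve a plugin
// class by name and skip it (with a status log entry) when resolution fails,
// instead of letting NoClassDefFoundError propagate and abort startup.
public final class PluginClassLoadingSketch {

    private static final Logger LOGGER = StatusLogger.getLogger();

    static Class<?> loadPluginClass(final String className, final ClassLoader loader) {
        try {
            return Class.forName(className, true, loader);
        } catch (final ClassNotFoundException e) {
            LOGGER.info("Plugin [{}] could not be loaded, class not found.", className, e);
        } catch (final LinkageError e) {
            // Covers NoClassDefFoundError from the Tomcat common-loader scenario,
            // where the plugin class is visible but its log4j2 dependencies are not.
            LOGGER.info("Plugin [{}] could not be loaded due to linkage error.", className, e);
        }
        return null; // caller skips this plugin
    }
}
{code}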



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (LOGCXX-500) Logging in Timing-Critical Applications

2018-08-29 Thread JIRA


[ 
https://issues.apache.org/jira/browse/LOGCXX-500?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16596471#comment-16596471
 ] 

Thorsten Schöning commented on LOGCXX-500:
--

I've merged again and all tests still pass for me as well. So, is it time to 
merge to master? Any objections or opinions? If it works for me without 
changing anything, I'm fairly confident it is backwards compatible.

> Logging in Timing-Critical Applications
> ---
>
> Key: LOGCXX-500
> URL: https://issues.apache.org/jira/browse/LOGCXX-500
> Project: Log4cxx
>  Issue Type: Improvement
>  Components: Core
>Affects Versions: 0.10.0
>Reporter: Thorsten Schöning
>Priority: Minor
> Attachments: config.xml, main.cpp, non_blocking.diff, 
> non_blocking_wo_debian_control.diff
>
>
> The following arrived on the mailing list; it is provided here as well, 
> mainly to collect the given patches etc.:
> {quote}Hello All,
> I'd like to share some experience as well as some patches with regard 
> to using log4cxx in timing-critical applications. First, a few words about 
> our requirements: it's a service which must generate some network 
> packets with up to a hundred microseconds of precision. Thus, it's very 
> important to have predictable code timing. One can argue that log4cxx 
> is not very well suited for such applications, but surprisingly it 
> works pretty well after some light tuning.
> So, what were the issues?
> Basically, from the library user's point of view they all looked the same: 
> all of a sudden, logging done with the LOG4CXX_DEBUG() macro could take an 
> unexpectedly long time to complete. For example, the same trace which 
> takes several μs 99% of the time would sometimes take hundreds of 
> microseconds or even a few milliseconds. After further investigation this 
> was traced down to a few root causes:
> 1. The async logger (which we have been using, of course) has an internal 
> queue to pass log entries to the background disk-writer thread. This queue 
> is mutex-protected, which might seem fine unless you think a little bit 
> more about it. First of all, someone calling LOG4CXX_DEBUG() to simply 
> put something into the log might not expect to be blocked waiting for a 
> mutex at all. The second point is that, although measures were taken 
> to minimize the time the disk thread holds that lock, 
> OS schedulers often work in a way that a thread which is blocked on a 
> mutex gets de-scheduled. With a normal OS-scheduler quantum that means 
> that the logging thread can be preempted for milliseconds.
> 2. There are some mutexes protecting the internal state of both loggers 
> and appenders. This means that two separate threads calling 
> LOG4CXX_DEBUG() can block each other. Even if they are using different 
> loggers, they will block on the appender! This has the same consequences 
> for execution timing and performance as described above.
> 3. The std::stringstream constructor has some internal locks of its 
> own. Unfortunately each MessageBuffer has its own instance of this 
> class. And also unfortunately, MessageBuffer is created inside the 
> LOG4CXX_DEBUG() macro. There is an optimization to not create a 
> stringstream for logging simple strings, but as soon as your log statement 
> has a single '<<' operator, it is created.
> 4. Dynamic memory allocations. Unfortunately there are still quite a few 
> of them, even though a memory pool is used in some other places. Thus, 
> hidden calls to new and malloc induce unpredictable delays.
> So, what did we do to mitigate these problems?
> 1. The natural solution for this issue was to use an atomic queue. There 
> are a few of them available, but we made use of boost::lockfree::queue as 
> it can serve as a drop-in replacement, allowing us to keep all present 
> functionality (a sketch of this hand-off pattern follows this quoted 
> description).
> 2. After looking more into the code, it appeared that two concurrent 
> calls to LOG4CXX_DEBUG() from within different threads are not harmful, 
> because the internal structures of the logger and appender are not being 
> changed there. The only thing that really requires protection is 
> concurrency between logging and configuring. Thus, we came to a solution: 
> read-write locks where logging calls act as readers and 
> configuration/exit calls are writers. With this approach, multiple 
> threads calling LOG4CXX_DEBUG() became free of any contention (see the 
> second sketch below).
> 3. This problem also has one simple solution: make one static 
> std::stringstream object per thread using the thread_local keyword. 
> Unfortunately we found one drawback: thread_local memory is not 
> released if the thread is not detached or joined. As there is some code 
> which does neither of these, we made the static stringstream an XML-file 
> configuration option. Also, there could be an issue with using multiple 
> MessageBuffer instances from within a single thread, but LOG4CXX_DEBUG
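The lock-free hand-off described in mitigation 1 above is implemented in C++ 
with boost::lockfree::queue in the attached patches; as an illustration only, 
here is the same pattern sketched in Java with a lock-free 
ConcurrentLinkedQueue (the class and back-off strategy are made up for the 
example): logging threads enqueue without taking a mutex, and one background 
writer thread drains the queue, including events still present at close.

{code:java}
import java.util.Queue;
import java.util.concurrent.ConcurrentLinkedQueue;

// Illustrative only: log4cxx is C++ and the patch uses boost::lockfree::queue.
// This sketch shows the same hand-off pattern: logging threads enqueue without
// blocking on a lock, a single background thread drains to "disk".
public final class LockFreeHandOffSketch {

    private final Queue<String> queue = new ConcurrentLinkedQueue<>();
    private volatile boolean closed;

    // Called from logging threads; never blocks on a mutex.
    public void append(String formattedEvent) {
        queue.offer(formattedEvent);
    }

    // Background disk-writer loop; drains remaining events before exiting.
    public void writerLoop() {
        while (!closed || !queue.isEmpty()) {
            String event = queue.poll();
            if (event != null) {
                System.out.println(event); // stand-in for the real file write
            } else {
                Thread.onSpinWait();       // back off briefly when idle (Java 9+)
            }
        }
    }

    public void close() {
        closed = true;
    }
}
{code}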

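Similarly, mitigations 2 and 3 above (read-write locking with logging calls as 
readers, plus a reusable per-thread message buffer) could be sketched as 
follows; again this is an illustrative Java analogue of the described C++ 
change, not log4cxx code:

{code:java}
import java.util.concurrent.locks.ReadWriteLock;
import java.util.concurrent.locks.ReentrantReadWriteLock;

// Illustrative only: the actual log4cxx change uses read-write locks plus a
// thread_local std::stringstream. Logging calls take the read lock,
// reconfiguration takes the write lock, and each thread reuses one buffer
// instead of constructing a new one per call.
public final class ReadMostlyLoggerSketch {

    private final ReadWriteLock configLock = new ReentrantReadWriteLock();

    // One reusable buffer per thread; avoids per-call buffer construction.
    private static final ThreadLocal<StringBuilder> BUFFER =
            ThreadLocal.withInitial(() -> new StringBuilder(256));

    private volatile String pattern = "%m";

    // Hot path: many threads may log concurrently without contending.
    public void debug(String message) {
        configLock.readLock().lock();
        try {
            StringBuilder buf = BUFFER.get();
            buf.setLength(0);                 // reuse, do not reallocate
            buf.append(pattern).append(": ").append(message);
            System.out.println(buf);          // stand-in for the appender write
        } finally {
            configLock.readLock().unlock();
        }
    }

    // Rare path: reconfiguration excludes all loggers while state changes.
    public void reconfigure(String newPattern) {
        configLock.writeLock().lock();
        try {
            this.pattern = newPattern;
        } finally {
            configLock.writeLock().unlock();
        }
    }
}
{code}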
[GitHub] logging-log4cxx pull request #8: Logcxx 500

2018-08-29 Thread asfgit
Github user asfgit closed the pull request at:

https://github.com/apache/logging-log4cxx/pull/8


---


[GitHub] logging-log4cxx pull request #8: Logcxx 500

2018-08-29 Thread DenysSmolianiuk
GitHub user DenysSmolianiuk opened a pull request:

https://github.com/apache/logging-log4cxx/pull/8

Logcxx 500



You can merge this pull request into a Git repository by running:

$ git pull https://github.com/DenysSmolianiuk/logging-log4cxx LOGCXX-500

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/logging-log4cxx/pull/8.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #8


commit a48691731b7f36f9909df55e7fc8e401554bb049
Author: Denys Smolianiuk 
Date:   2018-08-29T13:49:10Z

Before closing process events already present in queue

commit 285f36f698c37a27d1caed36b941cb041a3080ee
Author: Denys Smolianiuk 
Date:   2018-08-29T13:50:28Z

Increase number of events in unit-test as it is not possible to decrease 
capacity of boost atomic queue




---


[jira] [Commented] (LOGCXX-500) Logging in Timing-Critical Applications

2018-08-29 Thread Denys Smolianiuk (JIRA)


[ 
https://issues.apache.org/jira/browse/LOGCXX-500?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16596367#comment-16596367
 ] 

Denys Smolianiuk commented on LOGCXX-500:
-

There were two new unit-test failures. The first was because I slightly 
changed the behavior of the non-blocking async logger, and messages still 
present in the queue were being dropped on close. I fixed that.

The second issue is because the boost atomic queue does not actually decrease 
its underlying capacity (just like std::vector). So, to work around that I had 
to increase the number of events in the unit test.

Thank you,

Denys Smolianiuk

> Logging in Timing-Critical Applications
> ---
>
> Key: LOGCXX-500
> URL: https://issues.apache.org/jira/browse/LOGCXX-500
> Project: Log4cxx
>  Issue Type: Improvement
>  Components: Core
>Affects Versions: 0.10.0
>Reporter: Thorsten Schöning
>Priority: Minor
> Attachments: config.xml, main.cpp, non_blocking.diff, 
> non_blocking_wo_debian_control.diff
>
>
> The following arrived on the mailing list; it is provided here as well, 
> mainly to collect the given patches etc.:
> {quote}Hello All,
> I'd like to share some experience as well as some patches with regard 
> to using log4cxx in timing-critical applications. First, a few words about 
> our requirements: it's a service which must generate some network 
> packets with up to a hundred microseconds of precision. Thus, it's very 
> important to have predictable code timing. One can argue that log4cxx 
> is not very well suited for such applications, but surprisingly it 
> works pretty well after some light tuning.
> So, what were the issues?
> Basically, from the library user's point of view they all looked the same: 
> all of a sudden, logging done with the LOG4CXX_DEBUG() macro could take an 
> unexpectedly long time to complete. For example, the same trace which 
> takes several μs 99% of the time would sometimes take hundreds of 
> microseconds or even a few milliseconds. After further investigation this 
> was traced down to a few root causes:
> 1. The async logger (which we have been using, of course) has an internal 
> queue to pass log entries to the background disk-writer thread. This queue 
> is mutex-protected, which might seem fine unless you think a little bit 
> more about it. First of all, someone calling LOG4CXX_DEBUG() to simply 
> put something into the log might not expect to be blocked waiting for a 
> mutex at all. The second point is that, although measures were taken 
> to minimize the time the disk thread holds that lock, 
> OS schedulers often work in a way that a thread which is blocked on a 
> mutex gets de-scheduled. With a normal OS-scheduler quantum that means 
> that the logging thread can be preempted for milliseconds.
> 2. There are some mutexes protecting the internal state of both loggers 
> and appenders. This means that two separate threads calling 
> LOG4CXX_DEBUG() can block each other. Even if they are using different 
> loggers, they will block on the appender! This has the same consequences 
> for execution timing and performance as described above.
> 3. The std::stringstream constructor has some internal locks of its 
> own. Unfortunately each MessageBuffer has its own instance of this 
> class. And also unfortunately, MessageBuffer is created inside the 
> LOG4CXX_DEBUG() macro. There is an optimization to not create a 
> stringstream for logging simple strings, but as soon as your log statement 
> has a single '<<' operator, it is created.
> 4. Dynamic memory allocations. Unfortunately there are still quite a few 
> of them, even though a memory pool is used in some other places. Thus, 
> hidden calls to new and malloc induce unpredictable delays.
> So, what did we do to mitigate these problems?
> 1. The natural solution for this issue was to use an atomic queue. There 
> are a few of them available, but we made use of boost::lockfree::queue as 
> it can serve as a drop-in replacement, allowing us to keep all present 
> functionality.
> 2. After looking more into the code, it appeared that two concurrent 
> calls to LOG4CXX_DEBUG() from within different threads are not harmful, 
> because the internal structures of the logger and appender are not being 
> changed there. The only thing that really requires protection is 
> concurrency between logging and configuring. Thus, we came to a solution: 
> read-write locks where logging calls act as readers and 
> configuration/exit calls are writers. With this approach, multiple 
> threads calling LOG4CXX_DEBUG() became free of any contention.
> 3. This problem also has one simple solution: make one static 
> std::stringstream object per thread using the thread_local keyword. 
> Unfortunately we found one drawback: thread_local memory is not 
> released if the thread is not detached or joined. As there is some code 
> which doe

[jira] [Updated] (LOG4J2-2424) Process ID (pid) lookup

2018-08-29 Thread Simon Schneider (JIRA)


 [ 
https://issues.apache.org/jira/browse/LOG4J2-2424?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Simon Schneider updated LOG4J2-2424:

Description: 
Similar to LOG4J2-1884, it would be great to be able to use {{%pid}} as a 
lookup, e.g. to separate log files from different processes.

 

Use Case: Run multiple instances of the same application on the same host and 
have a way to separate their log files that allows correlating the running 
process with the correct log file.

  was:Similar to LOG4J2-1884 it would be great to be able to use {{%pid}} as 
lookup e.g. to seperate log files from different processes.


> Process ID (pid) lookup
> ---
>
> Key: LOG4J2-2424
> URL: https://issues.apache.org/jira/browse/LOG4J2-2424
> Project: Log4j 2
>  Issue Type: New Feature
>  Components: Lookups
>Affects Versions: 2.11.1
>Reporter: Simon Schneider
>Priority: Major
>
> Similar to LOG4J2-1884, it would be great to be able to use {{%pid}} as a 
> lookup, e.g. to separate log files from different processes.
>  
> Use Case: Run multiple instances of the same application on the same host and 
> have a way to separate their log files that allows correlating the running 
> process with the correct log file.
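For reference, a minimal sketch of what such a lookup could look like using 
Log4j 2's StrLookup plugin mechanism; the class, the plugin name "pid", and 
the configuration reference below are illustrative assumptions, not a 
committed API:

{code:java}
import java.lang.management.ManagementFactory;

import org.apache.logging.log4j.core.LogEvent;
import org.apache.logging.log4j.core.config.plugins.Plugin;
import org.apache.logging.log4j.core.lookup.StrLookup;

// Hypothetical lookup returning the current JVM's process id, so a file
// appender could include the pid in its file name to get one file per process.
@Plugin(name = "pid", category = StrLookup.CATEGORY)
public class ProcessIdLookup implements StrLookup {

    @Override
    public String lookup(final String key) {
        // RuntimeMXBean#getName usually returns "<pid>@<hostname>" on common JVMs.
        final String jvmName = ManagementFactory.getRuntimeMXBean().getName();
        final int at = jvmName.indexOf('@');
        return at > 0 ? jvmName.substring(0, at) : jvmName;
    }

    @Override
    public String lookup(final LogEvent event, final String key) {
        return lookup(key); // the key is ignored in this sketch
    }
}
{code}

With a lookup registered under the name "pid", a file appender could then 
reference it along the lines of fileName="logs/app-${pid:id}.log" (the key is 
ignored in this sketch).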



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)