carterkozak commented on pull request #452: URL: https://github.com/apache/logging-log4j2/pull/452#issuecomment-752268188
> Then the hell broke loose.

I don't think it's that bad! We fixed a problem for some users, and caused an issue for others. I'd consider the behavior in 2.14 an improvement over previous releases, but I agree that we should allow ThreadDeath to propagate in one way or another.

> How are we gonna handle ThreadDeath in this case?

In my proposal above, the AsyncAppender would catch it and warn/ignore as it catches all throwables, but the AppenderControl would allow ThreadDeath to propagate, bringing back 2.13.x behavior. The difference is that the AppenderControl can throw errors again, but background threads must be robust against them. This PR makes the AsyncAppender robust against Error by transitioning to another thread, but I'm not sure why we need to change threads instead of allowing the original thread to continue.

> In summary, I am really reluctant to add any kind of `catch (Throwable ...)` clauses.

I think the reason this strikes me as odd is that this implementation already uses a `catch (Throwable ...)`, just without writing it directly. We're using an uncaught exception handler to catch `Throwable` and handle it by logging to the StatusLogger and starting another thread. If we explicitly write the try/catch instead, we can produce identical behavior without creating a new thread (rough sketch below).

> Any code pointers?

The `catch Throwable` occurs outside log4j2, in the disruptor implementation: https://github.com/LMAX-Exchange/disruptor/blob/c1ea0d4a6d6144d338a9ff8cb0d46f2a7038b968/src/main/java/com/lmax/disruptor/BatchEventProcessor.java#L185-L190

I hope I'm not coming off too strong here, I really appreciate the work you've put into fixing this bug!
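To illustrate what I mean, here is a minimal, hypothetical sketch (the class name and `Runnable`-based queue are placeholders, not the actual AsyncAppender internals) of a dispatch loop that writes the `catch (Throwable ...)` explicitly: the error is handled in place and the same thread keeps consuming events, which is the same outcome the uncaught exception handler achieves by starting a replacement thread.

```java
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.LinkedBlockingQueue;

// Hypothetical background dispatcher loop; not the real AsyncAppender code.
final class DispatcherSketch implements Runnable {

    private final BlockingQueue<Runnable> queue;
    private volatile boolean stopped;

    DispatcherSketch(final BlockingQueue<Runnable> queue) {
        this.queue = queue;
    }

    void stop() {
        stopped = true;
    }

    @Override
    public void run() {
        while (!stopped) {
            try {
                // Blocks until an event is available, then delivers it.
                queue.take().run();
            } catch (final InterruptedException ie) {
                // Shutdown path: restore the interrupt flag and exit the loop.
                Thread.currentThread().interrupt();
                return;
            } catch (final Throwable t) {
                // Same handling an UncaughtExceptionHandler would perform
                // (in log4j2 this would go to the StatusLogger), but the
                // existing thread simply continues; no replacement thread is
                // created. A `catch (ThreadDeath td) { throw td; }` clause
                // could be added above this one if ThreadDeath must propagate.
                System.err.println("Dispatcher recovered from: " + t);
            }
        }
    }

    // Usage sketch:
    // new Thread(new DispatcherSketch(new LinkedBlockingQueue<>())).start();
}
```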
