Hi All,

I am thinking about how best to solve cases like https://issues.apache.org/jira/browse/LOG4J2-1955
In a nutshell: Log4j starts, but some external resource is not available (like a JMS broker, a database server, and so on). Right now, you're out of luck. Your other appenders will keep on happily logging, but your JMS or JDBC appender will never deliver events to its final destination, even if the external resource becomes available a second (or an hour) after the app is up.

A separate use case is what happens when the external resource goes down and comes back up after the app and Log4j have started. The appender usually loses its connection to that external resource, and events are no longer sent to their final repositories.

I just updated the JMS appender to handle the latter use case. In brief, the appender detects that its manager has gone stale and recreates it. (Please have a look; a code review is welcome.)

But when I think about the JMS broker not being up when the appender is created, that's a different story. We'd need to allow the appender to be created with a null manager and then deal with that later, when append() is called, roughly along the lines of the sketch at the end of this mail. Today, that's a bit of a pain to do because the AbstractManager factory method throws an IllegalStateException, so you need to catch that, which is a bit ugly. Not going through the AbstractManager is possible, but then you lose the caching it does.

Stepping back, though, I am wondering whether we should let Log4j itself retry creating appenders later if they were not successfully created on start-up. I'm not sure which way to go here. On the one hand, each appender knows its own stuff best, especially when it comes to its external resources. OTOH, having each appender grow more complicated is not great. But making Log4j core itself more complex might not be right either, for the same reason: each appender knows its own resources best.

Thoughts? I might go all the way with a JMS-specific solution and see how that feels.
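Just to make the idea concrete, here is a rough sketch of the lazy (re)creation I have in mind. It is only an illustration, not the actual JmsAppender/JmsManager code; names like BrokerManager, createManager(), and isStale() are hypothetical stand-ins:

// Sketch only: the shape of "create or recreate the manager lazily in append()".
// BrokerManager, createManager() and isStale() are hypothetical stand-ins, not Log4j API.
public class ReconnectingAppenderSketch {

    /** Hypothetical stand-in for an AbstractManager-style resource holder. */
    interface BrokerManager extends AutoCloseable {
        boolean isStale();                       // e.g. the underlying JMS connection dropped
        void send(String event) throws Exception;
    }

    private volatile BrokerManager manager;      // null if the broker was down at start-up

    public void append(final String event) {
        BrokerManager current = manager;
        // Use case 1: the broker was down when the appender was built, so manager is null.
        // Use case 2: the broker went away later and the manager has gone stale.
        if (current == null || current.isStale()) {
            synchronized (this) {
                current = manager;
                if (current == null || current.isStale()) {
                    closeQuietly(current);
                    current = createManager();   // may still return null if the broker is down
                    manager = current;
                }
            }
        }
        if (current == null) {
            return;                              // still no broker; drop or hand to the error handler
        }
        try {
            current.send(event);
        } catch (final Exception e) {
            closeQuietly(current);
            manager = null;                      // force re-creation on the next append
        }
    }

    /** Factory stand-in; the real AbstractManager factory throws IllegalStateException on failure. */
    private BrokerManager createManager() {
        try {
            return connectToBroker();
        } catch (final IllegalStateException e) {
            return null;                         // broker unreachable; try again on the next append
        }
    }

    private static void closeQuietly(final BrokerManager m) {
        if (m == null) {
            return;
        }
        try {
            m.close();
        } catch (final Exception ignored) {
            // best effort
        }
    }

    /** Placeholder for the real connection logic (JMS, JDBC, ...). */
    private BrokerManager connectToBroker() {
        throw new IllegalStateException("broker not reachable");
    }
}

Gary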
