I know, I know, just brainstorming aloud... No water cooler ;-)

On Jul 10, 2017 11:50, "Ralph Goers" <[email protected]> wrote:

> How is that any different than creating a new manager as your recovery
> logic? Remember, the reason for doing this is a) the managers live across
> reconfigurations while appenders don’t and b) the appenders should be
> fairly simple - the managers deal with all these kinds of complexities. For
> example, there are 3 variations of the Flume Appender but there is only one
> FlumeAppender class. There are 3 Flume Managers to handle each
> implementation and the appender simply picks one based on how it was
> configured.
>
> Ralph
>
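A rough sketch of the pattern Ralph describes above: one appender class,
several manager implementations, and a factory that picks among them. All
names here are hypothetical, not the real Log4j Flume classes.

    // Sketch only: hypothetical names, not the real Log4j Flume API.
    interface FlumeishManager {
        void send(String event);
    }

    class AvroManager implements FlumeishManager {
        public void send(String event) { /* ship via Avro RPC */ }
    }

    class EmbeddedManager implements FlumeishManager {
        public void send(String event) { /* hand off to an embedded agent */ }
    }

    class PersistentManager implements FlumeishManager {
        public void send(String event) { /* write to a local store first */ }
    }

    class FlumeishAppender {
        private final FlumeishManager manager;

        private FlumeishAppender(FlumeishManager manager) {
            this.manager = manager;
        }

        // The appender stays simple: the factory picks one of the three
        // manager implementations based on how it was configured.
        static FlumeishAppender create(String type) {
            switch (type) {
                case "Avro":       return new FlumeishAppender(new AvroManager());
                case "Embedded":   return new FlumeishAppender(new EmbeddedManager());
                case "Persistent": return new FlumeishAppender(new PersistentManager());
                default: throw new IllegalArgumentException(type);
            }
        }

        void append(String event) {
            manager.send(event); // all the complexity lives in the manager
        }
    }
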
> > On Jul 10, 2017, at 11:15 AM, Gary Gregory <[email protected]> wrote:
> >
> > Another idea, possibly wacky, is for an Appender to have two managers.
> > When one goes bad, you initialize the 2nd based on the same factory data,
> > then close the 1st one. The 2nd becomes current, rinse, repeat. Not sure
> > how this fits in with manager caching.
> >
> > Gary
> >
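A minimal sketch of the two-manager idea, with hypothetical Manager and
factory types; the manager caching Gary mentions is exactly the part this
glosses over.

    import java.util.concurrent.atomic.AtomicReference;
    import java.util.function.Supplier;

    // Sketch only: when the active manager goes bad, a replacement is
    // built from the same factory data, published, and the old one closed.
    class SwappingAppender {

        interface Manager {
            void send(String event) throws Exception;
            void close();
        }

        private final Supplier<Manager> factory; // same factory data each time
        private final AtomicReference<Manager> current;

        SwappingAppender(Supplier<Manager> factory) {
            this.factory = factory;
            this.current = new AtomicReference<>(factory.get());
        }

        void append(String event) {
            Manager m = current.get();
            try {
                m.send(event);
            } catch (Exception failed) {
                Manager replacement = factory.get(); // the "2nd" manager
                if (current.compareAndSet(m, replacement)) {
                    m.close(); // retire the bad one; rinse, repeat
                } else {
                    replacement.close(); // another thread already swapped
                }
            }
        }
    }

The caching question still applies: a cached manager may be shared by
several appenders, so closing and replacing it unilaterally needs care.
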
> > On Jun 28, 2017 13:37, "Matt Sicker" <[email protected]> wrote:
> >
> >> This topic makes me think some sort of CircuitBreakerAppender may be
> >> useful as an analogue to FailoverAppender. Instead of permanently
> >> failing over to the backup appenders, this appender would eventually
> >> switch back to the primary appender when it's safely back up.
> >> Supporting a full open/half-open/closed type circuit breaker would
> >> also be handy, and there's some sample implementation code that can be
> >> lifted from Commons.
> >>
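A hand-rolled sketch of the open/half-open/closed state machine Matt
mentions, standalone and with hypothetical names rather than code lifted
from Commons.

    import java.time.Duration;
    import java.time.Instant;

    // Sketch only. CLOSED: use the primary appender. OPEN: route to the
    // failover appenders. HALF_OPEN: allow one trial call to the primary.
    class CircuitBreaker {
        enum State { CLOSED, OPEN, HALF_OPEN }

        private final int failureThreshold;
        private final Duration retryAfter;
        private State state = State.CLOSED;
        private int failures;
        private Instant openedAt;

        CircuitBreaker(int failureThreshold, Duration retryAfter) {
            this.failureThreshold = failureThreshold;
            this.retryAfter = retryAfter;
        }

        // After the retry timeout, let one trial call through (HALF_OPEN).
        synchronized boolean allowPrimary() {
            if (state == State.OPEN
                    && Duration.between(openedAt, Instant.now()).compareTo(retryAfter) >= 0) {
                state = State.HALF_OPEN;
            }
            return state != State.OPEN;
        }

        synchronized void recordSuccess() {
            failures = 0;
            state = State.CLOSED; // trial call worked: switch back to primary
        }

        synchronized void recordFailure() {
            if (state == State.HALF_OPEN || ++failures >= failureThreshold) {
                state = State.OPEN; // go (back) to the failover appenders
                openedAt = Instant.now();
                failures = 0;
            }
        }
    }

The appender would consult allowPrimary() on each write, route to the
failover appenders when it returns false, and report the outcome back via
recordSuccess()/recordFailure().
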
> >> On 26 June 2017 at 00:04, Ralph Goers <[email protected]> wrote:
> >>
> >>> Managers are not designed to be shut down and restarted. If you are
> >>> causing your manager to come and go for the JMS support then I don’t
> >>> think you implemented it correctly. If you look at the TCP socket
> >>> manager, it has a reconnector inside of it to handle retrying the
> >>> connection. JMS should work the same way. In a way, the Rolling
> >>> appenders do the same thing. Whenever a rollover occurs they start
> >>> writing to a new file. This is handled by the Manager.
> >>>
> >>> Appenders that are recoverable should successfully start. The
> >>> managers should be created but be in a state where they throw
> >>> exceptions until they can recover. The appenders should not really
> >>> have much logic in them and should delegate most of the work to the
> >>> managers.
> >>>
> >>> Ralph
> >>>
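A sketch of both points Ralph makes: the manager is created once, throws
from send() while the connection is down, and runs a reconnector that
keeps retrying until the connection is back. Hypothetical names; the real
TcpSocketManager is more involved.

    import java.util.concurrent.TimeUnit;

    // Sketch only: the manager itself is never restarted; only its
    // internal connection comes and goes.
    class ReconnectingManager {

        interface Connection {
            void write(byte[] bytes) throws Exception;
        }

        private volatile Connection connection;

        private final Thread reconnector = new Thread(() -> {
            while (true) {
                if (connection == null) {
                    try {
                        connection = openConnection(); // retry until the peer is back
                    } catch (Exception stillDown) {
                        // stay in the loop and try again on the next pass
                    }
                }
                try {
                    TimeUnit.SECONDS.sleep(5);
                } catch (InterruptedException e) {
                    Thread.currentThread().interrupt();
                    return;
                }
            }
        }, "reconnector");

        ReconnectingManager() {
            reconnector.setDaemon(true);
            reconnector.start();
        }

        // Throws until the reconnector has restored the connection.
        void send(byte[] bytes) {
            Connection c = connection;
            if (c == null) {
                throw new IllegalStateException("connection is down");
            }
            try {
                c.write(bytes);
            } catch (Exception e) {
                connection = null; // the reconnector will rebuild it
                throw new IllegalStateException("send failed", e);
            }
        }

        // Stub standing in for the real connect logic.
        Connection openConnection() throws Exception {
            throw new UnsupportedOperationException("connect to the remote host here");
        }
    }
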
> >>>> On Jun 25, 2017, at 8:22 PM, Gary Gregory <[email protected]> wrote:
> >>>>
> >>>> Hi All,
> >>>>
> >>>> I am thinking about how to best solve cases like
> >>>> https://issues.apache.org/jira/browse/LOG4J2-1955
> >>>>
> >>>> In a nutshell: Log4j starts but some external resource is not
> >>>> available (like a JMS broker, a database server, and so on).
> >>>>
> >>>> Right now, you're out of luck. Your other appenders will keep on
> >>>> happily logging, but your JMS or JDBC appender will never log to its
> >>>> final destination, even if, a second (or an hour) after the app is
> >>>> up, the external resource becomes available.
> >>>>
> >>>> A separate use case is what happens when the external resource goes
> >>>> down and comes back up after the app and Log4j have started. The
> >>>> appender usually loses its connection to that external resource and
> >>>> events are no longer sent to their final repositories.
> >>>>
> >>>> I just updated the JMS appender to handle the latter use case. In
> >>>> brief, the appender detects that its manager has gone stale and
> >>>> recreates one. (Please have a look, a code review is welcome.)
> >>>>
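Roughly what that stale-detection shape looks like, sketched with
hypothetical names; the real change is in the JMS appender/manager pair.

    // Sketch only: append() checks for a stale manager and rebuilds it
    // from the original configuration data before writing.
    class RecoveringAppender {

        interface Manager {
            boolean isStale();
            void send(String event) throws Exception;
            void close();
        }

        private Manager manager;

        RecoveringAppender(Manager manager) {
            this.manager = manager;
        }

        synchronized void append(String event) throws Exception {
            if (manager.isStale()) {
                manager.close();
                manager = createManager(); // rebuild from the same config
            }
            manager.send(event);
        }

        // Stub standing in for the factory call that built the first manager.
        Manager createManager() {
            throw new UnsupportedOperationException("recreate from factory data");
        }
    }
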
> >>>> But when I think about the JMS broker not being up when the appender
> >>>> is created, then that's a different story. We'd need to allow for the
> >>>> appender to be created with a null manager and then handle that
> >>>> later, when append() is called.
> >>>>
> >>>> Today, that's a bit of a pain to do because the AbstractManager
> >>>> factory method throws an IllegalStateException, so you need to catch
> >>>> that, which is a bit ugly. Not going through the AbstractManager is
> >>>> possible, but then you lose the caching it does.
> >>>>
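A sketch of that "broker down at startup" shape, with hypothetical names:
the appender starts with a null manager and retries creation lazily in
append(), catching the IllegalStateException the factory throws.

    // Sketch only: lazy manager creation with double-checked locking.
    class LazyManagerAppender {

        interface Manager {
            void send(String event);
        }

        private volatile Manager manager; // null until the broker is reachable

        void append(String event) {
            Manager m = manager;
            if (m == null) {
                synchronized (this) {
                    if (manager == null) {
                        try {
                            manager = createManager();
                        } catch (IllegalStateException brokerStillDown) {
                            return; // drop (or buffer) until the broker is up
                        }
                    }
                    m = manager;
                }
            }
            m.send(event);
        }

        // Stands in for the (caching) manager factory, which throws
        // IllegalStateException when the external resource is unavailable.
        Manager createManager() {
            throw new IllegalStateException("JMS broker not available");
        }
    }
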
> >>>> But stepping back, I am wondering if we should not let Log4j itself
> >>>> try to create appenders at a later time if they were not successfully
> >>>> created on startup?
> >>>>
> >>>> I'm not sure which way to go here. On the one hand, each appender
> >>>> knows its own stuff the best, especially when it comes to its
> >>>> external resources. OTOH, having each appender get more complicated
> >>>> is not great. But making the Log4j core itself more complex might not
> >>>> be right either, for the same reason.
> >>>>
> >>>> Thoughts?
> >>>>
> >>>> I might go all the way with a JMS-specific solution and see how that
> >>>> feels.
> >>>>
> >>>> Gary
> >>>
> >>>
> >>>
> >>
> >>
> >> --
> >> Matt Sicker <[email protected]>
> >>
>
>
>
