... that the log is not sent to KafkaProducer, but I am tracking the
root cause.

Thanks,
Rui

------
From: Matt Sicker
Sent: 20 March 2017 15:25
To: Log4J Users List
Subject: Re: Re: log4j2 issue

Which data were lost? Was it pending log messages that hadn't been sent to
the Kafka producer?
The attachment is a configuration file: log4j2.xml

Thanks,
Rui

------
From: Matt Sicker
Sent: 14 March 2017 15:19
To: Log4J Users List
Subject: Re: log4j2 issue

The gist of what you're probably looking for is a failover appender
configuration:
<https://logging.apache.org/log4j/2.x/manual/appenders.html#FailoverAppender>.
This can be used to switch to another appender when one fails which is
perfect for networked appenders.
If you don't care about old log messages that haven't been published yet
between times of Kafka availability, then yeah, discarding old messages
like that is an interesting workaround.
On 17 March 2017 at 08:58, Mikael Ståldal wrote:
Have you tried to set blocking="false" on the AsyncAppender you have around
KafkaAppender?
Have you tried using the system properties log4j2.AsyncQueueFullPolicy and
log4j2.DiscardThreshold?
https://logging.apache.org/log4j/2.x/manual/configuration.html#log4j2.AsyncQueueFullPolicy
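
For concreteness, minimal sketches of those two alternatives (the appender
names, topic and broker address below are placeholders, not taken from the
attached configuration):

    <!-- Kafka appender; producer properties are passed as Property elements -->
    <Kafka name="Kafka" topic="app-logs">
      <PatternLayout pattern="%d %p %c - %m%n"/>
      <Property name="bootstrap.servers">kafka-1:9092</Property>
    </Kafka>

    <!-- non-blocking wrapper: when the queue is full, events are handed to
         the error appender (or dropped) rather than stalling the callers -->
    <Async name="AsyncKafka" blocking="false" bufferSize="1024">
      <AppenderRef ref="Kafka"/>
    </Async>

or, leaving blocking enabled, a discard policy on the JVM command line:

    -Dlog4j2.AsyncQueueFullPolicy=Discard -Dlog4j2.DiscardThreshold=INFO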
On Tue, Mar 14, 2017, Matt Sicker wrote:
The gist of what you're probably looking for is a failover appender
configuration: <
https://logging.apache.org/log4j/2.x/manual/appenders.html#FailoverAppender>.
This can be used to switch to another appender when one fails which is
perfect for networked appenders.
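
For illustration, a bare-bones failover arrangement along those lines (the
"File" appender is assumed to be defined elsewhere in the same configuration;
all names here are placeholders):

    <!-- the primary needs ignoreExceptions="false" so its failures reach
         the failover appender -->
    <Kafka name="Kafka" topic="app-logs" ignoreExceptions="false">
      <PatternLayout pattern="%d %p %c - %m%n"/>
      <Property name="bootstrap.servers">kafka-1:9092</Property>
    </Kafka>

    <Failover name="Failover" primary="Kafka" retryIntervalSeconds="60">
      <Failovers>
        <AppenderRef ref="File"/>
      </Failovers>
    </Failover>

Loggers then reference the Failover appender rather than Kafka directly;
while the Kafka appender keeps failing, events go to the file appender, and
the primary is retried after the configured interval.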
On 14 March 2017 at 07:00, Yang Rui wrote:
Hi,

I am Rui from China.

We use both the KafkaAppender (wrapped in an AsyncAppender) and the
FileAppender of log4j2, version 2.6.2, in our application.

Here is the scenario: when the Kafka cluster goes down and stops serving,
the application slows down and waits for the given timeout
(request.timeout.ms).
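
Not the attached log4j2.xml, but a rough sketch of how such a combination is
typically wired (topic, broker list, file path and the timeout value are
made-up illustration values):

    <?xml version="1.0" encoding="UTF-8"?>
    <Configuration status="warn">
      <Appenders>
        <!-- local file appender, unaffected by Kafka availability -->
        <File name="File" fileName="logs/app.log">
          <PatternLayout pattern="%d %p %c - %m%n"/>
        </File>

        <!-- Kafka appender; the producer blocks for up to request.timeout.ms
             while the brokers are unreachable -->
        <Kafka name="Kafka" topic="app-logs">
          <PatternLayout pattern="%d %p %c - %m%n"/>
          <Property name="bootstrap.servers">kafka-1:9092,kafka-2:9092</Property>
          <Property name="request.timeout.ms">5000</Property>
        </Kafka>

        <!-- async wrapper so Kafka sends happen off the application threads -->
        <Async name="AsyncKafka">
          <AppenderRef ref="Kafka"/>
        </Async>
      </Appenders>

      <Loggers>
        <Root level="info">
          <AppenderRef ref="File"/>
          <AppenderRef ref="AsyncKafka"/>
        </Root>
      </Loggers>
    </Configuration>

With the AsyncAppender's default blocking="true", once its queue fills up the
application threads wait on the slow Kafka producer, which matches the
slowdown described above.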