>
> I suspect that the log is not sent to KafkaProducer, but I am tracking the
> root cause.
>
> Thanks,
> Rui
>
>
>
> --
> *From:* Matt Sicker <boa...@gmail.com>
> *Sent:* 20 March 2017 15:25
> *To:* Log4J Users List
> *Subject:* Re: Re: log4j2 issue
Which data were lost? Was it pending log messages that hadn't been published
yet? And what measures can be taken to avoid this situation?
>>
>> The attachment is a configuration file: log4j2.xml
>>
>> Thanks,
>>
>> Rui
>> --
>> *From:* Matt Sicker <boa...@gmail.com>
>> *Sent:* 14 March 2017 15:19
>> *To:* Log4J Users List
>> *Subject:* Re: log4j2 issue
The gist of what you're probably looking for is a failover appender
configuration: <
https://logging.apache.org/log4j/2.x/manual/appenders.html#FailoverAppender>.
This can be used to switch to another appender when one fails, which is
perfect for networked appenders.
If you don't care about old log messages that haven't been published yet
between times of Kafka availability, then yeah, discarding old messages
like that is an interesting workaround.
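The failover setup described above might look like this as a sketch; the appender names, topic, pattern, and bootstrap servers here are illustrative, not taken from the attached log4j2.xml:

```xml
<!-- Sketch of a log4j2.xml with a FailoverAppender; names/values are illustrative -->
<Configuration status="warn">
  <Appenders>
    <Kafka name="Kafka" topic="app-logs">
      <PatternLayout pattern="%d %p %c{1.} [%t] %m%n"/>
      <Property name="bootstrap.servers">localhost:9092</Property>
    </Kafka>
    <File name="File" fileName="logs/app.log">
      <PatternLayout pattern="%d %p %c{1.} [%t] %m%n"/>
    </File>
    <!-- When the primary (Kafka) appender fails, events go to the file appender -->
    <Failover name="Failover" primary="Kafka">
      <Failovers>
        <AppenderRef ref="File"/>
      </Failovers>
    </Failover>
  </Appenders>
  <Loggers>
    <Root level="info">
      <AppenderRef ref="Failover"/>
    </Root>
  </Loggers>
</Configuration>
```

Loggers reference only the Failover appender; it forwards to the primary and only consults the Failovers list after the primary throws.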
On 17 March 2017 at 08:58, Mikael Ståldal wrote:
> Have you tried to set blocking="false" on the AsyncAppender you have around
> KafkaAppender?
>
> Have you tried using the system properties log4j2.AsyncQueueFullPolicy and
> log4j2.DiscardThreshold?
>
> https://logging.apache.org/log4j/2.x/manual/configuration.html#log4j2.AsyncQueueFullPolicy
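As a sketch, those two properties can go in a log4j2.component.properties file on the classpath (or be passed as -D JVM flags); with the Discard policy, events at or below the threshold level are dropped when the async queue is full:

```properties
# log4j2.component.properties (sketch)
# Drop events instead of blocking when the AsyncAppender queue is full
log4j2.AsyncQueueFullPolicy=Discard
# Events at this level or below are discarded (INFO is the default)
log4j2.DiscardThreshold=INFO
```

The blocking="false" suggestion is separate: it is an attribute on the <Async> appender element itself in log4j2.xml.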
On 14 March 2017 at 07:00, Rui wrote:
Hi,

I am Rui from China.

We use both the KafkaAppender (wrapped in an AsyncAppender) and the
FileAppender of Log4j 2, version 2.6.2, in our application.

Here is the scenario: when the Kafka cluster goes down and stops serving,
the application slows down and waits for the given timeout (request.timeout.ms)
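Since the stall described here comes from the Kafka producer's request.timeout.ms, one mitigation is to tighten the producer timeouts on the KafkaAppender itself. A sketch, with illustrative values (the property names are standard Kafka producer settings passed through the appender, not anything from the original configuration):

```xml
<!-- Sketch: shorter producer timeouts so a dead broker blocks logging less -->
<Kafka name="Kafka" topic="app-logs">
  <PatternLayout pattern="%d %p %m%n"/>
  <Property name="bootstrap.servers">localhost:9092</Property>
  <!-- Fail requests faster when the cluster is unreachable -->
  <Property name="request.timeout.ms">5000</Property>
  <!-- Cap how long send() may block waiting for metadata/buffer space -->
  <Property name="max.block.ms">2000</Property>
</Kafka>
```

Lower timeouts trade delivery reliability during brief broker hiccups for shorter application stalls during outages.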