Hi Matt,
I tried the FailoverAppender, but I found that at the moment Kafka goes
down, data is still lost.
What measures can be taken to avoid this situation?
The attached configuration file is log4j2.xml.
Thanks,
Rui
________________________________
From: Matt Sicker <[email protected]>
Sent: 14 March 2017 15:19
To: Log4J Users List
Subject: Re: log4j2 issue
The gist of what you're probably looking for is a failover appender
configuration: <
https://logging.apache.org/log4j/2.x/manual/appenders.html#FailoverAppender>.
It switches to another appender when the primary one fails, which is
ideal for networked appenders.
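For reference, a minimal Failover sketch (appender names and the retry
interval are illustrative) might look like this; note that the primary
appender must set ignoreExceptions="false" so its exceptions propagate to
the Failover appender:

```xml
<!-- Illustrative sketch: "Kafka" and "RollingFile" are assumed to be
     appenders defined elsewhere in the same <Appenders> section. -->
<Failover name="Failover" primary="Kafka" retryIntervalSeconds="60">
  <Failovers>
    <AppenderRef ref="RollingFile" />
  </Failovers>
</Failover>
```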
On 14 March 2017 at 07:00, Yang Rui <[email protected]> wrote:
> Hi,
>
> I am Rui from China.
>
> We use both the KafkaAppender (wrapped in an AsyncAppender)
> and the FileAppender of log4j2, version 2.6.2, in our application.
>
> Here is the scenario: when the Kafka cluster goes down and stops
> serving, the application slows down and waits for the given timeout (
> request.timeout.ms)
>
> before finally responding (once the bufferSize of AsyncKafka is reached).
>
> I am wondering whether there is any solution so that the
> FileAppender always works normally, without any performance impact
> from the KafkaAppender.
>
> In other words, the KafkaAppender should "
> DISCARD" the logs when the Kafka cluster is down, while the application
>
> can still output the logs via the FileAppender.
>
>
> Thanks,
> Rui
>
>
>
> ---------------------------------------------------------------------
> To unsubscribe, e-mail: [email protected]
> For additional commands, e-mail: [email protected]
>
--
Matt Sicker <[email protected]>
<configuration>
<Properties>
<Property name="kafka-servers">ip:port</Property>
<Property name="log-path">/applog/logging</Property>
</Properties>
<Appenders>
<Kafka name="Kafka" topic="test_test" ignoreExceptions="false">
<PatternLayout
pattern="%d{yyyy-MM-dd HH:mm:ss.SSS} [%-5level] [%t] %c{1} - %msg%n" />
<Property name="bootstrap.servers">${kafka-servers}</Property>
<Property name="request.timeout.ms">3000</Property>
<Property name="max.block.ms">30000</Property>
<Property name="retries">0</Property>
<Property name="acks">0</Property>
</Kafka>
<Failover name="Failover" primary="Kafka" retryIntervalSeconds="30">
<Failovers>
<AppenderRef ref="RollingFile" />
</Failovers>
</Failover>
<RollingFile name="RollingFile" fileName="${log-path}/logging.log"
filePattern="${log-path}/$${date:yyyy-MM}/logging-%d{MM-dd-yyyy}-%i.log.gz">
<PatternLayout
pattern="%d{yyyy-MM-dd HH:mm:ss.SSS} [%-5level] [%t] %c{1} - %msg%n" />
<Policies>
<TimeBasedTriggeringPolicy />
<SizeBasedTriggeringPolicy size="20 MB" />
</Policies>
<DefaultRolloverStrategy max="100" />
</RollingFile>
<Console name="Console" target="SYSTEM_OUT" ignoreExceptions="false">
<PatternLayout
pattern="%d{yyyy-MM-dd HH:mm:ss.SSS} [%-5level] [%t] %c{1} - %msg%n" />
</Console>
</Appenders>
<Loggers>
<Root level="INFO">
<AppenderRef ref="Failover" />
<AppenderRef ref="Console" />
</Root>
</Loggers>
</configuration>
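One hedged variation on the configuration above (attribute values
illustrative): wrapping the Kafka appender in a non-blocking
AsyncAppender, so that events are discarded rather than blocking the
application when the queue fills while Kafka is down:

```xml
<!-- Illustrative sketch: with blocking="false", events that arrive while
     the buffer is full are dropped (routed to the error appender, if one
     is configured) instead of blocking the calling thread. -->
<Async name="AsyncKafka" blocking="false" bufferSize="1024">
  <AppenderRef ref="Kafka" />
</Async>
```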