Hello Yuepeng,

If it is acceptable to drop log events when the appender cannot keep up, you can use a burst filter <https://logging.apache.org/log4j/2.x/manual/filters.html#BurstFilter>. If your burst/congestion periods are temporary and you don't want to lose events, you can consider employing an asynchronous appender <https://logging.apache.org/log4j/2.x/manual/appenders/delegating.html#AsyncAppender> as a buffer.
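For illustration, here is a minimal log4j2.xml sketch combining both ideas. The topic name, broker address, rates, and buffer size are placeholders, not values from your setup, so tune them to your own throughput:

  <Configuration status="warn">
    <Appenders>
      <!-- Kafka appender; "app-logs" and "localhost:9092" are placeholders -->
      <Kafka name="Kafka" topic="app-logs">
        <PatternLayout pattern="%d %p %c{1.} [%t] %m%n"/>
        <Property name="bootstrap.servers">localhost:9092</Property>
        <!-- Optional: drop events once the average exceeds ~100/s
             and the burst budget of 1000 queued events is exhausted -->
        <BurstFilter level="INFO" rate="100" maxBurst="1000"/>
      </Kafka>
      <!-- Buffer up to 8192 events; blocking="false" discards overflow
           (or routes it to an errorRef appender, if one is configured)
           instead of stalling the business threads -->
      <Async name="AsyncKafka" bufferSize="8192" blocking="false">
        <AppenderRef ref="Kafka"/>
      </Async>
    </Appenders>
    <Loggers>
      <Root level="INFO">
        <AppenderRef ref="AsyncKafka"/>
      </Root>
    </Loggers>
  </Configuration>

With blocking="false" your application threads never wait on Kafka; whether dropping or blocking is the right trade-off depends on how much log loss you can tolerate during congestion.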
Note that the Kafka appender <https://logging.apache.org/log4j/2.x/manual/appenders/message-queue.html#KafkaAppender> sadly needs some love. Due to lack of community interest and maintainer time, it is planned to be dropped in the next major release, i.e., Log4j 3. If you are actively using it, please either consider migrating to an alternative or step up as a maintainer.

Kind regards.

On Sun, Jan 26, 2025 at 12:09 PM Yuepeng Pan <panyuep...@apache.org> wrote:

> Hi, masters..
>
> Recently, I have enabled the Kafka appender in certain scenarios to
> collect logs, but we encountered an issue:
> When the log generation speed exceeds the write speed of Kafka,
> it negatively impacts the processing speed of core business logic because
> the high-frequency log output is embedded within the core business logic.
>
> May I know is there any available parameter for optimizing this issue?
>
> Thank you~
>
> Best,
> Yuepeng Pan