[ 
https://issues.apache.org/jira/browse/KAFKA-17254?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Krishna V Reddy updated KAFKA-17254:
------------------------------------
    Description: 
Kafka CLI tools, and the Kafka server for that matter, no longer respect the 
*KAFKA_LOG4J_OPTS* environment variable starting from v3.7.1: they always log 
to the console instead of writing logs to the directory set in the system 
property *kafka.logs.dir*. This regression started in 3.7.1, as rolling back 
to 3.7.0 fixes the issue.

Here's an example of listing Kafka topics with the Kafka CLI tool in v3.7.1:
{code:java}
bash-4.4$ bin/kafka-topics.sh --list --bootstrap-server localhost:21503 
--command-config /tmp/kafka/config/kraft/client.properties
00:11:29.415 [main] INFO org.apache.kafka.clients.admin.AdminClientConfig -- 
AdminClientConfig values:
    auto.include.jmx.reporter = true
    bootstrap.controllers = []
    bootstrap.servers = [localhost:21503]
...
hello-world # Actual topic output
00:11:29.960 [main] DEBUG org.apache.kafka.clients.admin.KafkaAdminClient -- 
[AdminClient clientId=adminclient-1] Initiating close operation.
00:11:29.960 [main] DEBUG org.apache.kafka.clients.admin.KafkaAdminClient -- 
[AdminClient clientId=adminclient-1] Waiting for the I/O thread to exit. Hard 
shutdown in 31536000000 ms.
00:11:29.961 [kafka-admin-client-thread | adminclient-1] INFO 
org.apache.kafka.common.utils.AppInfoParser -- App info kafka.admin.client for 
adminclient-1 unregistered
00:11:29.967 [kafka-admin-client-thread | adminclient-1] INFO 
org.apache.kafka.common.metrics.Metrics -- Metrics scheduler closed
00:11:29.967 [kafka-admin-client-thread | adminclient-1] INFO 
org.apache.kafka.common.metrics.Metrics -- Closing reporter 
org.apache.kafka.common.metrics.JmxReporter
00:11:29.967 [kafka-admin-client-thread | adminclient-1] INFO 
org.apache.kafka.common.metrics.Metrics -- Metrics reporters closed
00:11:29.967 [kafka-admin-client-thread | adminclient-1] DEBUG 
org.apache.kafka.clients.admin.KafkaAdminClient -- [AdminClient 
clientId=adminclient-1] Exiting AdminClientRunnable thread.
00:11:29.968 [main] DEBUG org.apache.kafka.clients.admin.KafkaAdminClient -- 
[AdminClient clientId=adminclient-1] Kafka admin client closed. {code}
Running the same command in v3.7.0 yields:
{code:java}
bash-4.4$ bin/kafka-topics.sh --list --bootstrap-server localhost:21503 
--command-config /tmp/kafka/config/kraft/client.properties
hello-world{code}
I added logging to *kafka-run-class.sh* to verify that the correct log4j 
properties file is being selected, and sure enough the default 
*config/tools-log4j.properties* is chosen by the script and the environment 
variable *KAFKA_LOG4J_OPTS* is set to point at it:
{code:java}
# Log4j settings
if [ -z "$KAFKA_LOG4J_OPTS" ]; then
  # Log to console. This is a tool.
  LOG4J_DIR="$base_dir/config/tools-log4j.properties"
  # If Cygwin is detected, LOG4J_DIR is converted to Windows format.
  (( WINDOWS_OS_FORMAT )) && LOG4J_DIR=$(cygpath --path --mixed "${LOG4J_DIR}")
  KAFKA_LOG4J_OPTS="-Dlog4j.configuration=file:${LOG4J_DIR}"
else
  # create logs directory
  if [ ! -d "$LOG_DIR" ]; then
    mkdir -p "$LOG_DIR"
  fi
fi{code}
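To rule out the shell logic itself, that branch can be exercised in isolation. This is a minimal sketch with placeholder paths, not the actual script:

```shell
# Minimal reproduction of the branch in kafka-run-class.sh (placeholder paths).
base_dir=/tmp/kafka-demo
LOG_DIR="$base_dir/logs"
unset KAFKA_LOG4J_OPTS
if [ -z "$KAFKA_LOG4J_OPTS" ]; then
  # No override was exported: fall back to the bundled tools config.
  KAFKA_LOG4J_OPTS="-Dlog4j.configuration=file:${base_dir}/config/tools-log4j.properties"
else
  # Override present: the script only ensures the log directory exists.
  mkdir -p "$LOG_DIR"
fi
echo "$KAFKA_LOG4J_OPTS"
# prints: -Dlog4j.configuration=file:/tmp/kafka-demo/config/tools-log4j.properties
```

With the variable unset, the fallback to the bundled file is taken as expected, so the selection logic itself appears to be working; the problem seems to be downstream of it.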
The *config/tools-log4j.properties* shipped in both 3.7.0 and 3.7.1 is 
identical and contains the following (Kafka's defaults):
{code:java}
log4j.rootLogger=WARN, stderr
log4j.appender.stderr=org.apache.log4j.ConsoleAppender
log4j.appender.stderr.layout=org.apache.log4j.PatternLayout
log4j.appender.stderr.layout.ConversionPattern=[%d] %p %m (%c)%n
log4j.appender.stderr.Target=System.err

# for connect-plugin-path
log4j.logger.org.reflections=ERROR{code}
This is a very bizarre bug, since nothing changed on our end other than 
upgrading to 3.7.1. We also tried upgrading to 3.8.0 to see if the problem 
goes away, but it doesn't. What makes this worse is that the same issue also 
affects the Kafka server: starting from 3.7.1 it only logs to the console, no 
longer respects the *kafka.logs.dir* system property, and does not seem to 
read the log4j configuration file that we set like so:
{code:java}
export 
KAFKA_LOG4J_OPTS="-Dlog4j.configuration=file:${KAFKA_CONFIG_DIRECTORY}/log4j.properties"
 {code}
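As a sanity check (a sketch; the path and the final command line are illustrative, since kafka-run-class.sh ultimately execs {{java $KAFKA_LOG4J_OPTS ...}}), one can simulate that expansion and confirm the property survives into the java invocation:

```shell
# kafka-run-class.sh ultimately runs: java $KAFKA_LOG4J_OPTS ... <main class>
# Simulate that expansion and extract the log4j property from the command line.
KAFKA_LOG4J_OPTS="-Dlog4j.configuration=file:/etc/kafka/log4j.properties"
cmdline="java $KAFKA_LOG4J_OPTS -cp libs/* kafka.Kafka config/server.properties"
echo "$cmdline" | grep -o 'log4j\.configuration=[^ ]*'
# prints: log4j.configuration=file:/etc/kafka/log4j.properties
```

The property does reach the command line in our setup, which is why we suspect a change in how the 3.7.1 distribution's logging backend consumes it rather than in the scripts.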
Our log4j properties file for the server looks like this:
{code:java}
log4j.rootLogger=INFO, stdout, kafkaAppender

log4j.appender.stdout=org.apache.log4j.ConsoleAppender
log4j.appender.stdout.layout=org.apache.log4j.PatternLayout
log4j.appender.stdout.layout.ConversionPattern=[%d] %p %m (%c)%n

log4j.appender.kafkaAppender=org.apache.log4j.RollingFileAppender
log4j.appender.kafkaAppender.MaxFileSize=1GB
log4j.appender.kafkaAppender.MaxBackupIndex=10
log4j.appender.kafkaAppender.File=${kafka.logs.dir}/server.log
log4j.appender.kafkaAppender.layout=org.apache.log4j.PatternLayout
log4j.appender.kafkaAppender.layout.ConversionPattern=[%d] %p %m (%c)%n

log4j.appender.stateChangeAppender=org.apache.log4j.RollingFileAppender
log4j.appender.stateChangeAppender.MaxFileSize=1GB
log4j.appender.stateChangeAppender.MaxBackupIndex=10
log4j.appender.stateChangeAppender.File=${kafka.logs.dir}/state-change.log
log4j.appender.stateChangeAppender.layout=org.apache.log4j.PatternLayout
log4j.appender.stateChangeAppender.layout.ConversionPattern=[%d] %p %m (%c)%n

log4j.appender.requestAppender=org.apache.log4j.RollingFileAppender
log4j.appender.requestAppender.MaxFileSize=1GB
log4j.appender.requestAppender.MaxBackupIndex=10
log4j.appender.requestAppender.File=${kafka.logs.dir}/kafka-request.log
log4j.appender.requestAppender.layout=org.apache.log4j.PatternLayout
log4j.appender.requestAppender.layout.ConversionPattern=[%d] %p %m (%c)%n

log4j.appender.cleanerAppender=org.apache.log4j.RollingFileAppender
log4j.appender.cleanerAppender.MaxFileSize=1GB
log4j.appender.cleanerAppender.MaxBackupIndex=10
log4j.appender.cleanerAppender.File=${kafka.logs.dir}/log-cleaner.log
log4j.appender.cleanerAppender.layout=org.apache.log4j.PatternLayout
log4j.appender.cleanerAppender.layout.ConversionPattern=[%d] %p %m (%c)%n

log4j.appender.controllerAppender=org.apache.log4j.RollingFileAppender
log4j.appender.controllerAppender.MaxFileSize=1GB
log4j.appender.controllerAppender.MaxBackupIndex=10
log4j.appender.controllerAppender.File=${kafka.logs.dir}/controller.log
log4j.appender.controllerAppender.layout=org.apache.log4j.PatternLayout
log4j.appender.controllerAppender.layout.ConversionPattern=[%d] %p %m (%c)%n

log4j.appender.authorizerAppender=org.apache.log4j.RollingFileAppender
log4j.appender.authorizerAppender.MaxFileSize=1GB
log4j.appender.authorizerAppender.MaxBackupIndex=10
log4j.appender.authorizerAppender.File=${kafka.logs.dir}/kafka-authorizer.log
log4j.appender.authorizerAppender.layout=org.apache.log4j.PatternLayout
log4j.appender.authorizerAppender.layout.ConversionPattern=[%d] %p %m (%c)%n

# Change the line below to adjust ZK client logging
log4j.logger.org.apache.zookeeper=INFO

# Change the two lines below to adjust the general broker logging level (output to server.log and stdout)
log4j.logger.kafka=INFO
log4j.logger.org.apache.kafka=INFO

# Change to DEBUG or TRACE to enable request logging
log4j.logger.kafka.request.logger=OFF, requestAppender
log4j.additivity.kafka.request.logger=false

# Uncomment the lines below and change log4j.logger.kafka.network.RequestChannel$ to TRACE for additional output
# related to the handling of requests
#log4j.logger.kafka.network.Processor=TRACE, requestAppender
#log4j.logger.kafka.server.KafkaApis=TRACE, requestAppender
#log4j.additivity.kafka.server.KafkaApis=false
log4j.logger.kafka.network.RequestChannel$=WARN, requestAppender
log4j.additivity.kafka.network.RequestChannel$=false

# Change the line below to adjust KRaft mode controller logging
log4j.logger.org.apache.kafka.controller=INFO, controllerAppender
log4j.additivity.org.apache.kafka.controller=false

# Change the line below to adjust ZK mode controller logging
log4j.logger.kafka.controller=TRACE, controllerAppender
log4j.additivity.kafka.controller=false

log4j.logger.kafka.log.LogCleaner=INFO, cleanerAppender
log4j.additivity.kafka.log.LogCleaner=false

log4j.logger.state.change.logger=TRACE, stateChangeAppender
log4j.additivity.state.change.logger=false

# Access denials are logged at INFO level, change to DEBUG to also log allowed accesses
log4j.logger.kafka.authorizer.logger=INFO, authorizerAppender
log4j.additivity.kafka.authorizer.logger=false
{code}
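To narrow down whether the file itself is the problem, one debugging step could be to swap in a minimal single-appender config in the same log4j 1.x syntax (a sketch; the appender name is arbitrary) and see whether even that is ignored:

```properties
# Minimal sketch: route everything to one rolling file under kafka.logs.dir
log4j.rootLogger=INFO, file
log4j.appender.file=org.apache.log4j.RollingFileAppender
log4j.appender.file.File=${kafka.logs.dir}/server.log
log4j.appender.file.MaxFileSize=100MB
log4j.appender.file.MaxBackupIndex=10
log4j.appender.file.layout=org.apache.log4j.PatternLayout
log4j.appender.file.layout.ConversionPattern=[%d] %p %m (%c)%n
```

In our testing even a trimmed-down file like this is not picked up on 3.7.1, which suggests the configuration file is never read at all rather than being partially misparsed.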
Again, nothing changed here; it's the same configuration we have used since 
v3.5.0. Any ideas what's going wrong? Did the 3.7.1 upgrade introduce any 
regressions or changes to Kafka logging?

P.S. Our Kafka jars are downloaded directly from 
[https://kafka.apache.org/downloads] with no modifications; we use the 
binaries as is.


> Kafka Server/CLI no longer respects log4j.properties/tools-log4j.properties 
> file
> --------------------------------------------------------------------------------
>
>                 Key: KAFKA-17254
>                 URL: https://issues.apache.org/jira/browse/KAFKA-17254
>             Project: Kafka
>          Issue Type: Bug
>          Components: logging, tools
>    Affects Versions: 3.8.0, 3.7.1
>            Reporter: Krishna V Reddy
>            Priority: Major
>



--
This message was sent by Atlassian Jira
(v8.20.10#820010)
