Class loading isolation is a known, frequently requested feature, and we plan
to add it in one of the forthcoming releases.

Re: the appenders, we would be seeing duplicate messages if there were an
issue there, but I'll double-check.
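For reference, the duplication to check for comes from log4j additivity: a child logger that names its own appender still forwards events up to the root logger's appender unless additivity is switched off. A minimal sketch in standard log4j 1.x syntax:

```properties
log4j.rootLogger=INFO, stdout

log4j.appender.stdout=org.apache.log4j.ConsoleAppender
log4j.appender.stdout.layout=org.apache.log4j.PatternLayout
log4j.appender.stdout.layout.ConversionPattern=[%d] %p %m (%c)%n

# Naming "stdout" again here would print each org.reflections event
# twice: once directly, and once via the root logger...
log4j.logger.org.reflections=INFO, stdout
# ...unless upward forwarding is disabled for this logger:
log4j.additivity.org.reflections=false
```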

Glad it worked after all.

Regards,
Konstantine

On Tue, Jan 17, 2017 at 4:34 PM, Stephane Maarek <
steph...@simplemachines.com.au> wrote:

> Related to what I discussed below, could there be a bug?
>
> For example, this line (for kafka):
> https://github.com/confluentinc/cp-docker-images/blob/master/debian/kafka/include/etc/confluent/docker/log4j.properties.template#L25
>
> looks different from this line, which appends ", stdout". Is that expected?
> https://github.com/confluentinc/cp-docker-images/blob/master/debian/kafka-connect-base/include/etc/confluent/docker/log4j.properties.template#L11
>
>
> Anyway, I figured out my issue… the connector I had created was using
> logback and scala-logging. Somehow, when those classes are loaded, logging
> falls apart and connect-log4j.properties is completely ignored.
> This should be mentioned somewhere as a disclaimer. It’s been driving me
> crazy. I think it also stems from the fact that all the jars are loaded in
> the same JVM. Could that introduce version conflicts?
>
> Regards,
> Stephane
>
> On 18 January 2017 at 9:35:42 am, Stephane Maarek (
> steph...@simplemachines.com.au) wrote:
>
> Hi Konstantine,
>
> I appreciate you taking the time to respond.
> So I have set CONNECT_LOG4J_ROOT_LEVEL=INFO, and the output I got is below.
> Now I understand I also need to set CONNECT_LOG4J_LOGGERS. Can I please
> have an example of how to set that value to suppress some DEBUG statements?
>
> For example, I tried CONNECT_LOG4J_LOGGERS="org.reflections=INFO,org.apache.kafka=INFO" and yet I’m still seeing all the DEBUG statements, like:
> 22:34:06.444 [CLASSPATH traversal thread.] DEBUG org.reflections.Reflections - could not scan file unit/kafka/producer/ProducerTest.scala in url file:/usr/bin/../share/java/kafka/kafka_2.11-0.10.1.0-cp2-test-sources.jar with scanner SubTypesScanner
>
> And it seems the bootstrap did take the variables into account as they get
> written successfully:
> root@6da04b77c18e:/# cat /etc/kafka/connect-log4j.properties
>
> log4j.rootLogger=INFO, stdout
>
> log4j.appender.stdout=org.apache.log4j.ConsoleAppender
> log4j.appender.stdout.layout=org.apache.log4j.PatternLayout
> log4j.appender.stdout.layout.ConversionPattern=[%d] %p %m (%c)%n
>
> log4j.logger.org.reflections=INFO, stdout
> log4j.logger.org.apache.kafka=INFO, stdout
>
> I’m new to log4j properties, so thanks for your help.
>
> Regards,
> Stephane
>
>
> On 18 January 2017 at 8:06:16 am, Konstantine Karantasis (
> konstant...@confluent.io) wrote:
>
> Hi Stephane,
>
> if you are using the docker images from confluent, a way to set the levels
> to specific loggers is described here:
>
> http://docs.confluent.io/3.1.1/cp-docker-images/docs/operations/logging.html#log4j-log-levels
>
> For Connect, you would need to set the environment variable
> CONNECT_LOG4J_LOGGERS in the same way that KAFKA_LOG4J_LOGGERS is set in
> the "docker run" command described on that page.
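> For example (container name and image tag below are illustrative; the value
> is a comma-separated list of logger=level pairs):

```shell
# Hypothetical container name and image tag; adjust to your deployment.
docker run -d \
  --name kafka-connect \
  -e CONNECT_LOG4J_ROOT_LEVEL=INFO \
  -e CONNECT_LOG4J_LOGGERS="org.reflections=ERROR,org.apache.kafka=INFO" \
  confluentinc/cp-kafka-connect:3.1.1

# Then read the container's output the preferred way:
docker logs -f kafka-connect
```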
>
> Regarding the redirection to stdout: if you are using Docker, this is not
> configurable with the current templates. Keeping everything on stdout lets
> you view the logs for each container directly through Docker via "docker
> logs <ContainerName>", which is the preferred way.
>
> Hope this helps,
> Konstantine
>
>
> On Mon, Jan 16, 2017 at 9:51 PM, Stephane Maarek <
> steph...@simplemachines.com.au> wrote:
>
> > The kind of output is the following:
> >
> > 05:15:34.878 [main] DEBUG org.apache.kafka.common.metrics.Metrics - Added sensor with name connections-closed:
> > 05:15:34.879 [main] DEBUG org.apache.kafka.common.metrics.Metrics - Added sensor with name connections-created:
> > 05:15:34.880 [main] DEBUG org.apache.kafka.common.metrics.Metrics - Added sensor with name bytes-sent-received:
> > 05:15:34.881 [main] DEBUG org.apache.kafka.common.metrics.Metrics - Added sensor with name bytes-sent:
> > 05:15:34.882 [main] DEBUG org.apache.kafka.common.metrics.Metrics - Added sensor with name bytes-received:
> > 05:15:34.882 [main] DEBUG org.apache.kafka.common.metrics.Metrics - Added sensor with name select-time:
> > 05:15:34.884 [main] DEBUG org.apache.kafka.common.metrics.Metrics - Added sensor with name io-time:
> > 05:15:34.905 [main] DEBUG org.apache.kafka.common.metrics.Metrics - Added sensor with name heartbeat-latency
> > 05:15:34.906 [main] DEBUG org.apache.kafka.common.metrics.Metrics - Added sensor with name join-latency
> > 05:15:34.907 [main] DEBUG org.apache.kafka.common.metrics.Metrics - Added sensor with name sync-latency
> > 05:15:34.970 [DistributedHerder] DEBUG org.apache.kafka.common.metrics.Metrics - Added sensor with name connections-closed:
> > 05:15:34.971 [DistributedHerder] DEBUG org.apache.kafka.common.metrics.Metrics - Added sensor with name connections-created:
> > 05:15:34.971 [DistributedHerder] DEBUG org.apache.kafka.common.metrics.Metrics - Added sensor with name bytes-sent-received:
> > 05:15:34.972 [DistributedHerder] DEBUG org.apache.kafka.common.metrics.Metrics - Added sensor with name bytes-sent:
> > 05:15:34.975 [DistributedHerder] DEBUG org.apache.kafka.common.metrics.Metrics - Added sensor with name bytes-received:
> > 05:15:34.977 [DistributedHerder] DEBUG org.apache.kafka.common.metrics.Metrics - Added sensor with name select-time:
> > 05:15:35.990 [DistributedHerder] DEBUG org.apache.kafka.clients.consumer.internals.ConsumerCoordinator - Group kafka-connect-main has no committed offset for partition _connect_offsets-39
> > 05:15:35.990 [DistributedHerder] DEBUG org.apache.kafka.clients.consumer.internals.ConsumerCoordinator - Group kafka-connect-main has no committed offset for partition _connect_offsets-6
> > 05:15:35.990 [DistributedHerder] DEBUG org.apache.kafka.clients.consumer.internals.ConsumerCoordinator - Group kafka-connect-main has no committed offset for partition _connect_offsets-35
> > 05:15:35.990 [DistributedHerder] DEBUG org.apache.kafka.clients.consumer.internals.ConsumerCoordinator - Group kafka-connect-main has no committed offset for partition _connect_offsets-2
> > 05:15:35.990 [DistributedHerder] DEBUG org.apache.kafka.clients.consumer.internals.ConsumerCoordinator - Group kafka-connect-main has no committed offset for partition _connect_offsets-31
> > 05:15:35.990 [DistributedHerder] DEBUG org.apache.kafka.clients.consumer.internals.ConsumerCoordinator - Group kafka-connect-main has no committed offset for partition _connect_offsets-26
> > 05:15:35.990 [DistributedHerder] DEBUG org.apache.kafka.clients.consumer.internals.ConsumerCoordinator - Group kafka-connect-main has no committed offset for partition _connect_offsets-22
> > 05:15:35.991 [DistributedHerder] DEBUG org.apache.kafka.clients.consumer.internals.ConsumerCoordinator - Group kafka-connect-main has no committed offset for partition _connect_offsets-18
> > 05:46:58.401 [CLASSPATH traversal thread.] DEBUG org.reflections.Reflections - could not scan file groovy/ui/icons/page_copy.png in url file:/usr/share/java/kafka-connect-hdfs/groovy-all-2.1.6.jar with scanner TypeAnnotationsScanner
> > 05:46:58.401 [CLASSPATH traversal thread.] DEBUG org.reflections.Reflections - could not scan file groovy/ui/icons/page_copy.png in url file:/usr/share/java/kafka-connect-hdfs/groovy-all-2.1.6.jar with scanner SubTypesScanner
> >
> >
> > *How do I stop all these loggers?*
> >
> > That’s what my connect-log4j.properties looks like:
> >
> >
> > log4j.rootLogger=INFO, stdout
> >
> > log4j.appender.stdout=org.apache.log4j.ConsoleAppender
> > log4j.appender.stdout.layout=org.apache.log4j.PatternLayout
> > log4j.appender.stdout.layout.ConversionPattern=[%d] %p %m (%c)%n
> >
> > log4j.logger.org.apache.kafka.clients.consumer=INFO, stdout
> >
> > I’m surprised, because I set INFO at the rootLogger and it doesn’t seem
> > to be taken into account.
> >
> >
> >
> >
> > On 16 January 2017 at 7:01:50 pm, Stephane Maarek (
> > steph...@simplemachines.com.au) wrote:
> >
> >
> > Hi,
> >
> > I created my own connector and I’m launching it in cluster mode, but
> > every DEBUG statement is still going to the console.
> > How can I control the log level of Kafka Connect and its associated
> > connectors? I’m using the Confluent docker image, btw.
> >
> > Thanks
> > Stephane
> >
>
>
