Hi,
Instead of trying the PR, make sure you are setting a valid security protocol
and connecting to a valid broker port.
Also look for any errors in the producer logs.
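
For example, if the broker's SSL listener is on port 9093 (a common
convention; the actual port depends on your broker's listeners config), the
producer config should pair the protocol and the port consistently, roughly:

    security.protocol=SSL
    bootstrap.servers=broker1:9093,broker2:9093,broker3:9093

A mismatched protocol/port combination can make the client misinterpret the
first bytes of a response as a message size and allocate a very large buffer,
which matches the symptom you describe.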

Thanks,

On Fri, Sep 21, 2018 at 12:35 PM Shantanu Deshmukh <shantanu...@gmail.com>
wrote:

> Hi Manikumar,
>
> I checked this issue, and there is a patch available:
> https://github.com/apache/kafka/pull/2408.patch
>
> I pulled Kafka 0.10.1.0 from GitHub and then tried applying this patch, but
> in several places I get an error that the patch doesn't apply.
> I am new to git and the patching process. Can you guide me here?
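>
> Roughly what I tried (a sketch from memory; I may have used slightly
> different commands, and the tag name is my assumption):
>
>     # clone the sources and check out the 0.10.1.0 release tag
>     git clone https://github.com/apache/kafka.git
>     cd kafka
>     git checkout 0.10.1.0
>
>     # fetch the patch and dry-run it before applying
>     curl -L -o 2408.patch https://github.com/apache/kafka/pull/2408.patch
>     git apply --check 2408.patch   # this is where I see "patch does not apply" errors
>     git apply 2408.patch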
>
> On Wed, Sep 19, 2018 at 1:02 PM Manikumar <manikumar.re...@gmail.com>
> wrote:
>
> > A similar issue is reported here: KAFKA-7304, but on the broker side.
> > Maybe you can create a JIRA and upload the heap dump for analysis.
> >
> > On Wed, Sep 19, 2018 at 11:59 AM Shantanu Deshmukh <shantanu...@gmail.com>
> > wrote:
> >
> > > Any thoughts on this matter? Someone, please help.
> > >
> > > On Tue, Sep 18, 2018 at 6:05 PM Shantanu Deshmukh <shantanu...@gmail.com>
> > > wrote:
> > >
> > > > Additionally, here's the producer config:
> > > >
> > > > kafka.bootstrap.servers=x.x.x.x:9092,x.x.x.x:9092,x.x.x.x:9092
> > > > kafka.acks=0
> > > > kafka.key.serializer=org.apache.kafka.common.serialization.StringSerializer
> > > > kafka.value.serializer=org.apache.kafka.common.serialization.StringSerializer
> > > > kafka.max.block.ms=1000
> > > > kafka.request.timeout.ms=1000
> > > > kafka.max.in.flight.requests.per.connection=1
> > > > kafka.retries=0
> > > > kafka.compression.type=gzip
> > > > kafka.security.protocol=SSL
> > > > kafka.ssl.truststore.location=/data/kafka/kafka-server-truststore.jks
> > > > kafka.ssl.truststore.password=XXXXXX
> > > > kafka.linger.ms=300
> > > > logger.level=INFO
> > > >
> > > > On Tue, Sep 18, 2018 at 5:36 PM Shantanu Deshmukh <shantanu...@gmail.com>
> > > > wrote:
> > > >
> > > >> Hello,
> > > >>
> > > >> We have a 3-broker Kafka 0.10.1.0 deployment in production. There are
> > > >> some applications with embedded Kafka producers that send application
> > > >> logs to a topic. This topic has 10 partitions with a replication factor
> > > >> of 3.
> > > >>
> > > >> We are observing that memory usage on some of these application servers
> > > >> keeps shooting through the roof intermittently. After taking a heap
> > > >> dump, we found that the top suspects were:
> > > >> *---------------------*
> > > >>
> > > >> *org.apache.kafka.common.network.Selector -*
> > > >>
> > > >> occupies *352,519,104 (24.96%)* bytes. The memory is accumulated in one
> > > >> instance of *"byte[]"* loaded by *"<system class loader>"*.
> > > >>
> > > >> *org.apache.kafka.common.network.KafkaChannel -*
> > > >>
> > > >> occupies *352,527,424 (24.96%)* bytes. The memory is accumulated in one
> > > >> instance of *"byte[]"* loaded by *"<system class loader>"*.
> > > >>
> > > >> *---------------------*
> > > >>
> > > >> Both of these were holding about 352MB of space. There were 3 such
> > > >> instances, so they were consuming about 1.2GB of memory.
> > > >>
> > > >> Now, regarding producer usage: not a huge volume of logs is being sent
> > > >> to the Kafka cluster, only about 200 msgs/sec. A single producer object
> > > >> is used throughout the application, and the async send function is
> > > >> used, roughly as in the sketch below.
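> > > >>
> > > >> A minimal sketch of how the producer is used (topic and variable names
> > > >> here are placeholders, not our actual code):
> > > >>
> > > >>     import java.util.Properties;
> > > >>     import org.apache.kafka.clients.producer.KafkaProducer;
> > > >>     import org.apache.kafka.clients.producer.ProducerRecord;
> > > >>
> > > >>     // one producer instance, created at startup and shared
> > > >>     Properties props = new Properties();
> > > >>     props.put("bootstrap.servers", "x.x.x.x:9092,x.x.x.x:9092,x.x.x.x:9092");
> > > >>     props.put("key.serializer",
> > > >>         "org.apache.kafka.common.serialization.StringSerializer");
> > > >>     props.put("value.serializer",
> > > >>         "org.apache.kafka.common.serialization.StringSerializer");
> > > >>     props.put("security.protocol", "SSL");
> > > >>     KafkaProducer<String, String> producer = new KafkaProducer<>(props);
> > > >>
> > > >>     // async send; the callback only logs failures
> > > >>     producer.send(new ProducerRecord<>("app-logs", logLine),
> > > >>         (metadata, exception) -> {
> > > >>             if (exception != null) {
> > > >>                 exception.printStackTrace();
> > > >>             }
> > > >>         });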
> > > >>
> > > >> What could be the cause of such huge memory usage? Is this some sort of
> > > >> memory leak in this specific Kafka version?
> > > >>
> > > >>
> > >
> >
>
