thanks Joel and Jiangjie,
I have figured it out. In addition to my log4j 2 config file I also needed
a log4j 1 config file; with that in place it works. Let me trace what
happens when the offsets are not committed and report back
On Wed, Jul 15, 2015 at 1:33 PM, Joel Koshy jjkosh...@gmail.com wrote:
- You
caught it, thanks for help!
any ideas what to do?
TRACE 2015-07-15 18:37:58,070 [chaos-akka.actor.jms-dispatcher-1019 ]
kafka.network.BoundedByteBufferSend - 113 bytes written.
ERROR 2015-07-15 18:37:58,078 [chaos-akka.actor.jms-dispatcher-1019 ]
kafka.consumer.ZookeeperConsumerConnector -
I am not sure how your project was set up, but I think it depends on what
log4j property file you specified when you started your application. Can
you check that you have a log4j appender defined and that the loggers are
directed to the correct appender?
Thanks,
Jiangjie (Becket) Qin
On 7/15/15, 8:10 AM,
- You can also change the log4j level dynamically via the
kafka.Log4jController mbean.
- You can also look at offset commit request metrics (mbeans) on the
broker (just to check if _any_ offset commits are coming through
during the period you see no moving offsets).
- The alternative is to
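Joel's mbean suggestion can be exercised programmatically over JMX. A minimal sketch follows; note that the ObjectName (`kafka:type=kafka.Log4jController`) and the `setLogLevel(logger, level)` operation signature are assumptions to confirm in jconsole against your broker. It runs against the local JVM's MBean server here, so standalone it only reports that the mbean is absent:

```java
import java.lang.management.ManagementFactory;
import javax.management.MBeanServer;
import javax.management.ObjectName;

public class KafkaLogLevel {
    // ObjectName assumed from the mbean Joel mentions; verify with jconsole.
    public static String canonicalControllerName() {
        try {
            return new ObjectName("kafka:type=kafka.Log4jController").getCanonicalName();
        } catch (Exception e) {
            throw new RuntimeException(e);
        }
    }

    public static void main(String[] args) throws Exception {
        ObjectName name = new ObjectName(canonicalControllerName());
        MBeanServer server = ManagementFactory.getPlatformMBeanServer();
        if (server.isRegistered(name)) {
            // Operation name and signature assumed: setLogLevel(logger, level).
            server.invoke(name, "setLogLevel",
                    new Object[] {"kafka", "TRACE"},
                    new String[] {"java.lang.String", "java.lang.String"});
        } else {
            // Standalone (no broker in this JVM) it only reports absence.
            System.out.println("Log4jController not registered: " + name);
        }
    }
}
```

Against a real broker you would connect with `JMXConnectorFactory` to the broker's JMX port instead of the platform MBean server.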
Is there anything in the broker log?
Is it possible that your client and broker are not running the same
version?
Jiangjie (Becket) Qin
On 7/15/15, 11:40 AM, Vadim Bobrov vadimbob...@gmail.com wrote:
there are lots of files under the logs directory of the broker; just in
case, I checked all files modified around the time of the error and found
nothing unusual
both client and broker are 0.8.2.1
Could it have something to do with running it in the cloud? We are on
Linode and I remember having random
I’m not sure if it is related to running in the cloud. Do you see this
disconnection issue always happening on committing offsets, or does it
happen randomly?
Jiangjie (Becket) Qin
On 7/15/15, 12:53 PM, Vadim Bobrov vadimbob...@gmail.com wrote:
it is pretty random
On Wed, Jul 15, 2015 at 4:22 PM, Jiangjie Qin j...@linkedin.com.invalid
wrote:
If that is the case, I guess there might still be some value in trying to
run the broker and clients locally and seeing if the issue still exists.
Thanks,
Jiangjie (Becket) Qin
On 7/15/15, 1:23 PM, Vadim Bobrov vadimbob...@gmail.com wrote:
Thanks Jiangjie,
unfortunately turning the trace level on does not seem to work (any log
level, actually). I am using log4j2 (through slf4j) and despite including
the log4j1 bridge and these lines:
<Logger name="org.apache.kafka" level="trace"/>
<Logger name="kafka" level="trace"/>
in my conf file I could not
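For what it's worth, a minimal log4j2.xml along these lines would give those loggers an appender to write to (the Console appender and pattern are placeholders for whatever the project actually uses), assuming the log4j 1.x bridge (log4j-1.2-api) is on the classpath:

```xml
<?xml version="1.0" encoding="UTF-8"?>
<Configuration status="warn">
  <Appenders>
    <Console name="Console" target="SYSTEM_OUT">
      <PatternLayout pattern="%level %d [%t] %logger - %msg%n"/>
    </Console>
  </Appenders>
  <Loggers>
    <Logger name="org.apache.kafka" level="trace"/>
    <Logger name="kafka" level="trace"/>
    <Root level="info">
      <AppenderRef ref="Console"/>
    </Root>
  </Loggers>
</Configuration>
```

If I recall correctly, the 0.8.2 broker/consumer Scala classes log through log4j 1.x directly, which would explain the finding above that a log4j 1 config file was also needed before any output appeared.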
I am trying to replace ActiveMQ with Kafka in our environment, however I
have encountered a strange problem that basically prevents us from using
Kafka in production. The problem is that sometimes the offsets are not
committed.
I am using Kafka 0.8.2.1, offset storage = kafka, high level consumer,
Can you take a look at the kafka commit rate mbean on your consumer?
Also, can you consume the offsets topic while you are committing
offsets and see if/what offsets are getting committed?
(http://www.slideshare.net/jjkoshy/offset-management-in-kafka/32)
Thanks,
Joel
On Tue, Jul 14, 2015 at
Thanks, Joel, I will, but regardless of my findings the basic problem will
still be there: there is no guarantee that the offsets will be committed
after commitOffsets, because commitOffsets does not return its exit status,
nor does it block, as I understand it, until the offsets are committed. In other
Actually, how are you committing offsets? Are you using the old
(ZookeeperConsumerConnector) or new KafkaConsumer?
It is true that the current APIs don't return any result, but it would
help to check if anything is getting into the offsets topic - unless
you are seeing errors in the logs, the
I am using ZookeeperConsumerConnector
actually I set up a consumer for __consumer_offsets the way you suggested
and now I cannot reproduce the situation any longer. Offsets are committed
every time.
On Tue, Jul 14, 2015 at 1:49 PM, Joel Koshy jjkosh...@gmail.com wrote:
just caught this error again. I issue commitOffsets - no error, but no
committing of offsets either. Watching __consumer_offsets shows no new
messages either. Then, a few minutes later, I issue commitOffsets again -
all committed. Unless I am doing something terribly wrong, this is very
unreliable
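Since commitOffsets returns nothing, one workaround is to commit and then verify, retrying until the commit is actually visible. A rough sketch of that pattern follows; the commit and verification callbacks are placeholders (in practice the commit would be `connector.commitOffsets()` and the check could be an OffsetFetchRequest or watching __consumer_offsets, as Joel suggested above), demonstrated here with a simulated flaky commit:

```java
import java.util.function.BooleanSupplier;

public class ReliableCommit {
    /**
     * Retry sketch for a void commit API: call commit, then verify that the
     * commit landed, retrying up to maxAttempts times. Returns the attempt
     * number on which the commit was confirmed, or -1 if never confirmed.
     */
    public static int commitWithRetry(Runnable commit, BooleanSupplier verified,
                                      int maxAttempts) {
        for (int attempt = 1; attempt <= maxAttempts; attempt++) {
            commit.run();
            if (verified.getAsBoolean()) {
                return attempt;          // committed and confirmed
            }
        }
        return -1;                       // never confirmed
    }

    public static void main(String[] args) {
        // Simulated flaky commit: the offset only becomes visible on call 3.
        int[] calls = {0};
        int attempts = commitWithRetry(
                () -> calls[0]++,
                () -> calls[0] >= 3,
                5);
        System.out.println("confirmed after attempt " + attempts);
    }
}
```

This does not fix the underlying issue, but it turns a silent commit failure into something observable and retryable.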
On Tue,