Thanks Guozhang!!
Below is the code for iterating over log messages:
.
.
for (final KafkaStream<byte[], byte[]> stream : streams) {
ConsumerIterator<byte[], byte[]> consumerIte = stream.iterator();
Hi
It seems to me that 0.8.1.1 doesn't have the ConsumerMetadata API. So which
broker should I choose in order to commit and fetch offset information?
Shall I use ZooKeeper to manage offsets manually instead?
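(For what it's worth: in 0.8.x the high-level consumer keeps committed offsets in ZooKeeper under a well-known znode layout, so managing them manually means reading and writing those paths. A minimal sketch of just the path construction — the group and topic names here are made-up examples, not anything from this thread:

```java
// Sketch: the ZooKeeper znode where the 0.8.x high-level consumer
// stores the committed offset for one partition of one topic.
public class ZkOffsetPath {
    // Layout: /consumers/<group>/offsets/<topic>/<partition>
    static String offsetPath(String group, String topic, int partition) {
        return "/consumers/" + group + "/offsets/" + topic + "/" + partition;
    }

    public static void main(String[] args) {
        // "my-group" and "my-topic" are hypothetical example names.
        System.out.println(offsetPath("my-group", "my-topic", 0));
        // prints: /consumers/my-group/offsets/my-topic/0
    }
}
```

Reading or updating that znode with any ZooKeeper client is the manual alternative until the offset-management APIs are fully available.)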
Thanks,
Weide
On Sun, Aug 3, 2014 at 4:34 PM, Weide Zhang weo...@gmail.com
Bhavesh, take a look at
https://cwiki.apache.org/confluence/display/KAFKA/FAQ#FAQ-Whyisdatanotevenlydistributedamongpartitionswhenapartitioningkeyisnotspecified ?
Maybe the root-cause issue is something else? Even if producers produce
more or less than they do now, you should be able
Is it possible there is another solution to the problem? I think if you
could better describe the problem(s) you are facing and how you are
architected, then you may get responses from others who have perhaps
faced the same problem with similar architectures ... or maybe folks can
chime in
Hi, everyone.
I'm using 0.8.1.1, and I have 8 brokers and 3 topics, each with 16
partitions and 3 replicas.
I'm getting logs I haven't seen before, like the one below; they occur every 5 seconds.
[2014-08-05 11:11:32,478] INFO conflict in /brokers/ids/2 data:
Hi, every one.
I ran into a strange case: my consumer using the high-level API worked fine
at first, but a couple of days later it blocked in ConsumerIterator.hasNext(),
while there are pending messages on the topic: with
kafka-console-consumer.sh I can see continuous messages.
Then I connect to
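(One thing worth checking when hasNext() blocks forever: the high-level consumer's consumer.timeout.ms defaults to -1, meaning block indefinitely. Setting it to a positive value makes hasNext() throw ConsumerTimeoutException instead of hanging, which at least makes the stall visible so you can log it and reconnect. A sketch of the relevant config — the ZooKeeper address and group id are placeholders:

```java
import java.util.Properties;

public class ConsumerTimeoutConfig {
    static Properties consumerProps() {
        Properties props = new Properties();
        props.put("zookeeper.connect", "localhost:2181"); // placeholder
        props.put("group.id", "example-group");           // placeholder
        // -1 (the default) blocks forever in ConsumerIterator.hasNext();
        // a positive value makes hasNext() throw ConsumerTimeoutException
        // after that many milliseconds with no message available.
        props.put("consumer.timeout.ms", "5000");
        return props;
    }

    public static void main(String[] args) {
        System.out.println(consumerProps().getProperty("consumer.timeout.ms"));
        // prints: 5000
    }
}
```

It doesn't fix the underlying stall, but it turns a silent hang into an exception you can act on.)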
Hi,
I just started with Apache Kafka and wrote a high level consumer program
following the example given here
https://cwiki.apache.org/confluence/display/KAFKA/Consumer+Group+Example.
Though I was able to run the program and consume the messages, I have one
doubt regarding