Updating this issue.
I updated the log config to log.dirs=\\kafka-logs. The log file is deleted,
but the index file still can't be deleted. I got the error message below.
[2013-12-11 00:07:59,671] INFO Deleting index
d:\kafka-logs\test001-0\00507600.index (kafka.log.OffsetIndex)
[2013-12-11
Hi,
I am trying my hand at Kafka 0.8. I have 3 Kafka servers and 3
ZooKeepers running. With 10 partitions and a replication factor of 2,
4 producers were pushing data into Kafka, each with its own topic.
There are 4 consumers which are getting the data from Kafka.
The
Hello,
I am writing a simple program in Java using the Kafka 0.8.0 jar compiled
with Scala 2.10.
I have designed my program with a singleton class which holds a map of
(consumer group, ConsumerConnector) and a map of (topic, Producer).
This singleton class provides two methods, send(topic,
Hi,
I have a problem fetching messages from Kafka. I am using the simple
consumer API in Java to fetch messages from Kafka (the same one
shown in the Kafka introduction example). The problem is that after a while
(could be 30 min or a couple of hours), the consumer does not receive any
On 11/12/2013 10:34, Vincent Rischmann wrote:
Hello,
I am writing a simple program in Java using the Kafka 0.8.0 jar
compiled with Scala 2.10.
I have designed my program with a singleton class which holds a map
of (consumer group, ConsumerConnector) and a map of (topic, Producer).
This
Hello,
No, the entire log file isn't bigger than that buffer size and this is
occurring while trying to retrieve the first message on the topic, not the last.
I attached a log. Line 408 (Iterating.) is where we get an iterator
and start iterating over the data. There should be
Hi,
I have a problem fetching messages from Kafka. I am using the simple
consumer API in Java to fetch messages from Kafka (the same one
shown in the Kafka introduction example). The problem is that after a while
(could be 30 min or a couple of hours), the consumer does not receive any
These numbers are a bit misleading. In Kafka, a topic partition is the
smallest unit by which we distribute messages among consumers in the same
consumer group. So, if the number of consumers is larger than the total
number of partitions in a Kafka cluster, some consumers will never get any
data.
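This constraint is easy to see with a toy assignment loop. The sketch below uses a simple round-robin rule (a simplification; the real 0.8 consumer uses a range-based algorithm), and all names and counts here are hypothetical: with 12 consumers over 10 partitions, 2 consumers end up owning nothing.

```java
import java.util.ArrayList;
import java.util.LinkedHashMap;
import java.util.List;
import java.util.Map;

public class PartitionAssignment {
    // Toy round-robin assignment: partition p goes to consumer p mod C.
    // When consumers outnumber partitions, the surplus consumers own nothing.
    static Map<String, List<Integer>> assign(int partitions, List<String> consumers) {
        Map<String, List<Integer>> owned = new LinkedHashMap<>();
        for (String c : consumers) owned.put(c, new ArrayList<>());
        for (int p = 0; p < partitions; p++) {
            owned.get(consumers.get(p % consumers.size())).add(p);
        }
        return owned;
    }

    public static void main(String[] args) {
        List<String> consumers = new ArrayList<>();
        for (int i = 0; i < 12; i++) consumers.add("consumer-" + i);
        long idle = assign(10, consumers).values().stream()
                .filter(List::isEmpty).count();
        System.out.println("idle consumers: " + idle); // prints "idle consumers: 2"
    }
}
```

The takeaway is the same regardless of the exact assignment algorithm: capacity in a consumer group is bounded by the partition count, so over-provisioning consumers past that count buys nothing.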
In
Yes, this seems to be a bug in javaapi; could you file a jira?
Normally, a consumer will create a stream once and keep iterating on the
stream. The connection to ZK happens when the consumer connector is
created. The connection to the brokers happens after the creation of the
stream.
Thanks,
Have you looked at
https://cwiki.apache.org/confluence/display/KAFKA/FAQ#FAQ-Myconsumerseemstohavestopped%2Cwhy%3F?
If that doesn't help, could you file a jira and attach your log?
The Apache
mailing list doesn't support attachments.
Thanks,
Jun
On Wed, Dec 11, 2013 at 6:15 AM, Sybrandy, Casey
Have you looked at
https://cwiki.apache.org/confluence/display/KAFKA/FAQ#FAQ-Myconsumerseemstohavestopped%2Cwhy%3F?
Thanks,
Jun
On Wed, Dec 11, 2013 at 3:59 AM, shahab shahab.mok...@gmail.com wrote:
Hi,
I have a problem in fetching messages from Kafka. I am using simple
consumer API in
On 11/12/2013 17:09, Jun Rao wrote:
Yes, this seems to be a bug in javaapi; could you file a jira?
Normally, a consumer will create a stream once and keep iterating on the
stream. The connection to ZK happens when the consumer connector is
created. The connection to the brokers happens after
First, I saw the partial message looking at raw network traffic via Wireshark,
not the output of the iterator as the iterator never seems to provide me any
data. That's where the code is hanging.
Second, here's the output from the ConsumerOffsetChecker:
grp1,tdf_topic,0-0
Do you have compression turned on in the broker?
Guozhang
On Wed, Dec 11, 2013 at 8:43 AM, Sybrandy, Casey
casey.sybra...@six3systems.com wrote:
First, I saw the partial message looking at raw network traffic via
Wireshark, not the output of the iterator as the iterator never seems to
Actually, I think I isolated where the error may be. We have a library that
was recently updated to fix an issue. Other code using the same part of the
library is working properly, but for some reason in this case it isn't.
Apologies for wasting people's time, but I just never even thought
When using ZK to keep track of last offsets, metrics, etc., how do you know
when you are pushing your ZK cluster to its limit?
Or can ZK handle thousands of writes/reads per second with no problem, since
it is all in-memory? But even so, you need some idea of its upper limits and
how close you are to
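One way to put a rough number on that load: with the 0.8 high-level consumer, offsets are committed to ZK roughly once per owned partition per auto-commit interval. The figures below (partition count, commit interval) are illustrative assumptions, not measurements from this thread.

```java
public class ZkOffsetLoad {
    // Back-of-envelope estimate: one ZK write per owned partition
    // per offset auto-commit interval.
    static double offsetWritesPerSecond(int totalPartitions, double commitIntervalSeconds) {
        return totalPartitions / commitIntervalSeconds;
    }

    public static void main(String[] args) {
        // Hypothetical figures: 4 topics x 10 partitions, committing every 10 s.
        double rate = offsetWritesPerSecond(4 * 10, 10.0);
        System.out.println(rate + " offset writes/s to ZK"); // prints "4.0 offset writes/s to ZK"
    }
}
```

Even scaled up a couple of orders of magnitude, an estimate like this gives you something concrete to compare against your observed ZK request latencies.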
I tried the sample code and it works. I can also delete the old index file
manually.
Thanks,
Liang Cui
2013/12/12 Jay Kreps jay.kr...@gmail.com
Is the path d:\kafka-logs\test001-0\00507600.index correct?
The tricky thing here is that we don't have access to Windows for testing, so we
Hi,
In my application, the produce rate can be very high at specific
times of day, while returning
to a low rate the rest of the time. Frequently, my data logs are deleted
before they are
consumed by clients due to a lack of disk space during the busy times.
Increasing consume
One possible approach is to change the retention policy on the broker.
How large can your messages accumulate on the brokers at peak time?
Guozhang
On Wed, Dec 11, 2013 at 9:09 PM, xingcan xingc...@gmail.com wrote:
Hi,
In my application, the produce speed could be very high at some specific
time
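For reference, a sketch of what the retention change suggested above could look like in the broker's server.properties. These are standard broker retention properties; the values are placeholders for illustration, not recommendations.

```properties
# Delete log segments older than 4 hours (default is 168 hours, i.e. 7 days).
log.retention.hours=4

# Alternatively, cap the retained size per partition; oldest segments are
# deleted once the partition exceeds this many bytes (here ~100 GiB).
log.retention.bytes=107374182400
```

Note that log.retention.bytes applies per partition, so the effective disk budget is this value multiplied by the number of partitions hosted on the broker.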
Guozhang,
Thanks for your prompt reply. I have two 300GB SAS disks for each broker.
At peak time, the produce rate for each broker is about 70MB/s. Apparently,
this rate is already limited by the network, while the consume rate is
lower,
because some topics are consumed by more than one group.
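Those figures make the disk pressure easy to quantify. A minimal sketch, ignoring replication traffic, compression, and the second disk: at a sustained 70 MB/s, a single 300 GB disk fills in a little over an hour, so the effective retention window at peak is very short.

```java
public class RetentionBudget {
    // Hours until a disk fills at a sustained produce rate.
    // Ignores replication overhead and compression.
    static double hoursUntilFull(double capacityGB, double produceMBps) {
        double seconds = capacityGB * 1024.0 / produceMBps; // GB -> MB
        return seconds / 3600.0;
    }

    public static void main(String[] args) {
        // Figures from the thread: 300 GB per disk, ~70 MB/s peak per broker.
        System.out.printf("%.1f hours until full%n", hoursUntilFull(300, 70));
    }
}
```

This is why size-based retention (log.retention.bytes) is often a better fit than time-based retention when the produce rate is bursty: it bounds disk usage directly rather than indirectly through time.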