honored just the first time the consumer starts and begins polling? In
other words, every time the consumer starts, does it start from the
beginning even if it has already read those messages?
On Wed, Dec 7, 2016 at 1:43 AM, Harald Kirsch wrote:
Have you defined
auto.offset.reset: earliest
or otherwise made sure (KafkaConsumer.position()) that the consumer does
not just wait for *new* messages to arrive?
Harald.
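For reference, a minimal consumer configuration sketch with that setting; the broker address and group id are placeholders, not values from this thread:

```properties
# Fragment of a consumer configuration (illustrative values)
bootstrap.servers=localhost:9092
group.id=example-group
# Only takes effect when the group has no committed offset for a
# partition (or the committed offset is out of range):
auto.offset.reset=earliest
```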
On 06.12.2016 20:11, Mohit Anchlia wrote:
I see this message in the logs:
[2016-12-06 13:54:16,586] INFO [GroupCoordina
mers need to restart, I'm wondering if you can
restart other threads in your application but keep the consumer up and
running to avoid the rebalances.
On Tue, Dec 6, 2016 at 7:18 AM, Harald Kirsch wrote:
We have consumer processes which need to restart frequently, say, every
5 minutes. We have 10 of them, so we are facing two restarts per minute
on average.
1) It seems that nearly every time a consumer restarts, the group is
rebalanced. Even if the restart takes less than the heartbeat interv
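For context, these are the consumer settings that govern how quickly the coordinator declares a member dead; the values below are the usual defaults, shown for illustration rather than as recommendations:

```properties
session.timeout.ms=30000     # coordinator evicts a silent member after this
heartbeat.interval.ms=3000   # how often the consumer sends heartbeats
```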
This sounds like you might want to run the Kafka broker on Windows. Have
a look at https://issues.apache.org/jira/browse/KAFKA-1194 for possible
issues with regard to log cleaning.
Regards,
Harald.
On 06.12.2016 00:50, Doyle, Keith wrote:
We’re beginning to make use of Kafka, and it is enc
I thought the name of a segment contains the offset of the first message
in that segment, but I just saw offsets being processed that would map
to a segment file that was cleaned and is listed as empty by dir.
This is 0.10.* on Windows.
Is there something strange going on with data still mappe
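The naming convention mentioned above can be made concrete with a small sketch: Kafka names each segment file after the offset of the first record it contains, zero-padded to 20 digits, so an offset can be mapped back to its segment by name.

```java
public class SegmentName {
    // Kafka log segments are named after the base offset of the first
    // record they contain, zero-padded to 20 digits,
    // e.g. 00000000000000000000.log for a segment starting at offset 0.
    static String segmentName(long baseOffset) {
        return String.format("%020d.log", baseOffset);
    }

    public static void main(String[] args) {
        System.out.println(segmentName(172)); // 00000000000000000172.log
    }
}
```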
Hi all,
we are using Apache Commons Daemon (aka procrun) to run the Kafka broker
as a Windows service. This does not create a process with 'kafka.Kafka'
in the name, so bin\windows\kafka-server-stop.bat does not work.
Instead we use stop-service to shut down the service (=procrun=Kafka),
but t
Hi all,
there are so many timeouts to tweak mentioned in the documentation that
I wonder what the correct producer and consumer configuration is to
survive a broker shutdown of, say, one hour.
With "survive" I mean that the processes are idle or blocked and keep
trying to send their data, an
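A sketch of the producer-side knobs such a question is usually about; the keys are real producer configs, but the values are placeholders, not recommendations:

```properties
retries=2147483647        # keep retrying failed sends
retry.backoff.ms=1000     # wait between retries
max.block.ms=3600000      # how long send()/metadata fetches may block
buffer.memory=33554432    # records queue here while the broker is down
```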
There is hardly any way anyone can guess what happens there from just
the numbers.
What you should do is start Kafka with -XX:+HeapDumpOnOutOfMemoryError,
possibly even reduce -Xmx to 500 MB, and let it bomb out. Then take a
look at the generated heap dump with the Eclipse Memory Analyzer
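Assuming the stock start scripts, those flags can be passed through the environment variables that bin/kafka-run-class.sh honors; the paths below are assumptions about a typical installation:

```shell
export KAFKA_HEAP_OPTS="-Xmx500M"
export KAFKA_OPTS="-XX:+HeapDumpOnOutOfMemoryError -XX:HeapDumpPath=/tmp"
bin/kafka-server-start.sh config/server.properties
```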
Hi all,
we just had a case with Kafka 0.9 where rebuilding the index of a
~200 MB segment took 45 seconds on average. All indexes of a partition
were corrupt; there are 13 segments, and the rebuild took 10 minutes.
After the rebuild, these are representative sizes:
% ll -h /data/xyz-0
-rw-r--r-- 1
AFAIK, the current (most recent) segment is not touched by the cleaner.
Not sure if this might be the problem in your case.
Regards,
Harald.
On 11.08.2016 11:22, Christiane Lemke wrote:
Hello all,
@Tom - thank you for your answer :)
Here's the link to a gist with a minimal example:
https://g
Hi,
when starting the consumer you may want to call seekToEnd()
https://kafka.apache.org/090/javadoc/org/apache/kafka/clients/consumer/KafkaConsumer.html#seekToEnd(org.apache.kafka.common.TopicPartition...)
Harald.
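To illustrate the semantics of seekToEnd() without the Kafka client itself, here is a toy in-memory sketch (not the real API): after seeking to the end, a poll returns only records appended later, never the backlog.

```java
import java.util.ArrayList;
import java.util.List;

// Toy model of seekToEnd() semantics: the consumer position jumps past
// everything already in the log, so only later appends are returned.
public class SeekToEndSketch {
    private final List<String> log = new ArrayList<>();
    private int position = 0; // next offset to read

    void append(String record) { log.add(record); }

    // Analogous to KafkaConsumer.seekToEnd(): skip the existing backlog.
    void seekToEnd() { position = log.size(); }

    // Analogous to poll(): return everything from the current position on.
    List<String> poll() {
        List<String> out = new ArrayList<>(log.subList(position, log.size()));
        position = log.size();
        return out;
    }

    public static void main(String[] args) {
        SeekToEndSketch c = new SeekToEndSketch();
        c.append("old-1");
        c.append("old-2");
        c.seekToEnd();                // skip the two old records
        c.append("new-1");            // arrives after the seek
        System.out.println(c.poll()); // prints [new-1]
    }
}
```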
On 11.08.2016 02:26, sat wrote:
Hi,
We have come up with Kafka consumer and
In case you are on Windows: compaction plainly does not work.
See https://issues.apache.org/jira/browse/KAFKA-1194
Root cause seems to be Windows' file locking and Kafka trying to
delete/rename files that are open in another thread/part of the broker.
Harald.
On 11.08.2016 07:38, David Yu wr
There are some details here suggesting that a JDK bug in Java 7 causes
the last-modified time to be updated on broker restart:
https://issues.apache.org/jira/browse/KAFKA-3802
On Fri, Aug 5, 2016 at 6:12 AM, Harald Kirsch wrote:
Hi,
experimenting with log compaction, I see Kafka go through all the steps,
in particular I see positive messages in log-cleaner.log and *.deleted
files. Yet once the *.deleted segment files have disappeared, the
segment and index files with size 0 are still kept.
I stopped and restarted Ka
Hi all,
we are currently bitten by this bug
https://issues.apache.org/jira/browse/KAFKA-1194 whereby Kafka on
Windows cannot delete/compact the logs, because it has a file open while
trying to rename it.
This does not work on Windows; it's fine on *nixes. But we're stuck with
0.9 for now, so