Hi all,
0.10 consumers use the poll() method to heartbeat the Kafka brokers. Is there
any way that I can make the consumer heartbeat but not poll any messages? The
javadoc says the recommended way is to move message processing to another
thread. But when message processing keeps failing (because a third
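One pattern that fits this question, sketched under the assumption of the 0.10 Java client: pause() the assigned partitions and keep calling poll(), so the consumer continues to heartbeat without fetching more records. The topic, group, and back-pressure check below are placeholders, not from the thread:

```java
import java.util.Collections;
import java.util.Properties;

import org.apache.kafka.clients.consumer.ConsumerRecords;
import org.apache.kafka.clients.consumer.KafkaConsumer;

public class PausedHeartbeat {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092"); // placeholder
        props.put("group.id", "my-group");                // placeholder
        props.put("key.deserializer",
            "org.apache.kafka.common.serialization.StringDeserializer");
        props.put("value.deserializer",
            "org.apache.kafka.common.serialization.StringDeserializer");

        try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props)) {
            consumer.subscribe(Collections.singletonList("my-topic")); // placeholder
            while (true) {
                ConsumerRecords<String, String> records = consumer.poll(100);
                // ... hand records off to the processing thread ...
                if (processingIsBacklogged()) {
                    // Stop fetching, but keep calling poll() so heartbeats continue.
                    consumer.pause(consumer.assignment());
                } else {
                    consumer.resume(consumer.assignment());
                }
            }
        }
    }

    private static boolean processingIsBacklogged() {
        return false; // placeholder for your own back-pressure check
    }
}
```

pause() only suppresses fetching for the paused partitions; the consumer stays in the group as long as poll() keeps being called within the session timeout.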
Hi,
This is Magesh, working as an Engineer at Visa Inc. I'm relatively new to the
Kafka ecosystem. We are using Kafka 0.9, and during testing in our test
environments we have noticed that the producer does retries with
NETWORK_EXCEPTION.
To debug the issue, I enabled TRACE logging and noticed that
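For reference, a minimal log4j fragment that turns on TRACE for the producer's network layer looks roughly like this (the logger names assume the standard Java client packages):

```properties
# Trace request/response traffic between the client and the brokers
log4j.logger.org.apache.kafka.clients.NetworkClient=TRACE
# Trace the producer's batching and retry behaviour
log4j.logger.org.apache.kafka.clients.producer.internals.Sender=TRACE
```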
Hello,
I have asked the question on Stack Overflow as well:
http://stackoverflow.com/questions/39737201/spark-kafka-per-topic-number-of-partitions-map-not-honored
I am confused about the "per topic number of partitions" parameter when
creating an InputDStream using KafkaUtils.createStream(...)
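For context, in Spark's receiver-based API that parameter is a map from topic name to the number of consumer threads for the receiver, not the number of Kafka partitions. A sketch, with all names as placeholders:

```java
import java.util.HashMap;
import java.util.Map;

import org.apache.spark.streaming.api.java.JavaPairReceiverInputDStream;
import org.apache.spark.streaming.api.java.JavaStreamingContext;
import org.apache.spark.streaming.kafka.KafkaUtils;

public class CrawlStream {
    // jssc comes from the surrounding application; all names are placeholders
    static JavaPairReceiverInputDStream<String, String> create(JavaStreamingContext jssc) {
        Map<String, Integer> topicThreads = new HashMap<>();
        // The Integer is the number of consumer THREADS in the receiver,
        // not the number of partitions the topic has.
        topicThreads.put("my-topic", 2);
        return KafkaUtils.createStream(jssc, "zk-host:2181", "my-group", topicThreads);
    }
}
```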
Lawrence,
There are two common ways to approach registration of schemas. The first is
to just rely on auto-registration that the serializers do (I'm assuming
you're using the Java clients & serializers here, or an equivalent
implementation in another language). In this case you can generally just
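For concreteness, a minimal sketch of what auto-registration looks like from the producer side with the Confluent Avro serializers (the URL is a placeholder, and the explicit auto.register.schemas line is an assumption for illustration; it defaults to true):

```properties
key.serializer=io.confluent.kafka.serializers.KafkaAvroSerializer
value.serializer=io.confluent.kafka.serializers.KafkaAvroSerializer
schema.registry.url=http://schemaregistry:8081
# Defaults to true; set to false to require schemas to be pre-registered
auto.register.schemas=true
```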
Hi,
I've recently started using Kafka to read documents coming through a web
crawler. What I'm noticing is that when I'm dealing with a few million
documents, the consumer processes the same message over and over again. It
looks like the data is not getting committed for some reason. This is not
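Since the message is cut off, as a guess at the usual culprit: if processing a poll() batch outlives the session timeout, the consumer is kicked out of the group before it can commit, and the partitions are re-read from the last committed offset. The consumer settings that typically matter for this symptom (values here are illustrative, not a recommendation):

```properties
enable.auto.commit=true
auto.commit.interval.ms=5000
# If processing one poll() batch can outlive the session, shrink the batch
max.poll.records=100
session.timeout.ms=30000
```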
One more thing:
Guozhang pointed me towards this sample for micro-batching:
https://github.com/apache/kafka/blob/177b2d0bea76f270ec087ebe73431307c1aef5a1/streams/examples/src/main/java/org/apache/kafka/streams/examples/wordcount/WordCountProcessorDemo.java
This is a good example and
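The pattern in that demo, roughly: a Processor buffers records in a state store and flushes on punctuate(). A compressed sketch against the 0.10-era Processor API (the store name and interval are assumptions):

```java
import org.apache.kafka.streams.processor.Processor;
import org.apache.kafka.streams.processor.ProcessorContext;
import org.apache.kafka.streams.state.KeyValueStore;

public class MicroBatchProcessor implements Processor<String, String> {
    private ProcessorContext context;
    private KeyValueStore<String, Long> store;

    @Override
    @SuppressWarnings("unchecked")
    public void init(ProcessorContext context) {
        this.context = context;
        this.store = (KeyValueStore<String, Long>) context.getStateStore("Counts");
        context.schedule(1000); // request punctuate() roughly every second
    }

    @Override
    public void process(String key, String value) {
        // Accumulate into the store instead of forwarding immediately
        Long count = store.get(value);
        store.put(value, count == null ? 1L : count + 1);
    }

    @Override
    public void punctuate(long timestamp) {
        // Forward the accumulated micro-batch downstream here
    }

    @Override
    public void close() {}
}
```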
Ah, that was it. I was passing the same Serde while creating the topology.
It works after I removed it.
Thanks!
Walter
On Mon, Sep 26, 2016 at 1:16 PM, Guozhang Wang wrote:
> Hi Walter,
>
> One thing I can think of is that, if you pass the serde object as part of
> your
Sometimes engineers run scripts out of order, so we will need the exact steps
you are following. Are you running through VirtualBox/Vagrant? In that case we
will need to see the Vagrantfile.local file:
https://www.codatlas.com/github.com/apache/kafka/trunk/vagrant/system-test-Vagrantfile.local
We will also
I increased partitions for one existing topic (2->10), but was surprised to
see that it entirely reset the committed offsets of my consumer group.
All topics & partitions were reset to the earliest offset available, and
the consumer read everything again.
Documentation doesn't mention anything
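For anyone hitting something similar: the committed offsets can be inspected per partition with the consumer-groups tool before and after the change (the group name is a placeholder, and in 0.10.0 the --new-consumer flag is needed for Kafka-stored offsets):

```shell
bin/kafka-consumer-groups.sh --new-consumer \
  --bootstrap-server localhost:9092 \
  --describe --group my-consumer-group
```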
Aggie,
I'm not able to reproduce your behavior in 0.10.0.1.
> I did more testing and found the rule (the topic is created with
"--replication-factor 2 --partitions 1" in the following case):
>   node 1             node 2
>   down (lead)        down (replica)
>   down (replica)     up (lead)
Hi Mathieu,
If the messages are sent asynchronously, then what you're observing is indeed
right. There is no guarantee that the first will arrive at the destination
first.
Perhaps you can try sending them synchronously (i.e., wait until the first one
is received, before sending the second).
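A synchronous send just blocks on the Future returned by send() before sending the next message. A sketch (the producer, topic, and record values are placeholders):

```java
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerRecord;
import org.apache.kafka.clients.producer.RecordMetadata;

public class OrderedSend {
    // producer is an existing KafkaProducer<String, String>; names are placeholders
    static void sendInOrder(KafkaProducer<String, String> producer) throws Exception {
        ProducerRecord<String, String> first = new ProducerRecord<>("my-topic", "first");
        ProducerRecord<String, String> second = new ProducerRecord<>("my-topic", "second");

        // Block until the broker acknowledges the first message...
        RecordMetadata m1 = producer.send(first).get();
        // ...and only then send the second, preserving arrival order.
        producer.send(second).get();
    }
}
```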
Hi,
I would like to know how the context is initialised in Kafka Streams, because
I have a problem with one Kafka Streams application. Every time I call it, I
notice that the context is initialised (or created) more than once, which is
abnormal and causes a bug in the system.