How to keep consumers alive without polling new messages

2016-09-27 Thread Yifan Ying
Hi all, 0.10 consumers use the poll() method to heartbeat Kafka brokers. Is there any way I can make the consumer heartbeat without polling any messages? The javadoc says the recommended way is to move message processing to another thread. But when message processing keeps failing (because a third
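A common answer to this question is the pause/resume pattern: pause all assigned partitions, keep calling poll() so the consumer still heartbeats, then resume once processing recovers. The sketch below simulates that pattern with an in-memory `FakeConsumer` standing in for a real Kafka consumer (the class, the `drain` helper, and the retry policy are all illustrative, not part of any Kafka API):

```python
class FakeConsumer:
    """Hypothetical stand-in for a Kafka consumer (e.g. kafka-python's
    KafkaConsumer), exposing poll/pause/resume/assignment."""

    def __init__(self, records):
        self._records = list(records)
        self._paused = set()
        self.heartbeats = 0  # every poll() call counts as one heartbeat

    def assignment(self):
        return {("topic", 0)}

    def pause(self, partitions):
        self._paused.update(partitions)

    def resume(self, partitions):
        self._paused.difference_update(partitions)

    def poll(self):
        self.heartbeats += 1
        if self._paused or not self._records:
            return []  # paused: heartbeat only, deliver no records
        return [self._records.pop(0)]


def drain(consumer, process, max_retries=3):
    """Process records; on failure, pause and poll() so the consumer
    keeps heartbeating (and stays in the group) while it retries."""
    out = []
    while True:
        records = consumer.poll()
        if not records:
            break
        for rec in records:
            for _attempt in range(max_retries):
                try:
                    out.append(process(rec))
                    break
                except Exception:
                    consumer.pause(consumer.assignment())
                    consumer.poll()  # heartbeat while paused, no new records
                    consumer.resume(consumer.assignment())
    return out
```

With the real Java client the analogous calls are `consumer.pause(...)`, `consumer.poll(...)`, and `consumer.resume(...)` on the assigned partitions.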

Fwd: Kafka Defunct Sockets

2016-09-27 Thread Magesh Kumar
Hi, this is Magesh, working as an Engineer at Visa Inc. I'm relatively new to the Kafka ecosystem. We are using Kafka 0.9, and during testing in our test environments we have noticed that the producer retries with NETWORK_EXCEPTION. To debug the issue, I enabled TRACE logging and noticed that
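Retry behaviour on NETWORK_EXCEPTION is governed by a handful of producer settings; a sketch of the relevant ones (values are examples, not recommendations):

```properties
# Producer settings relevant to retry behaviour (illustrative values):
retries=3                                # how many times a failed send is retried
retry.backoff.ms=100                     # wait between retry attempts
request.timeout.ms=30000                 # how long to wait for a broker response
max.in.flight.requests.per.connection=1  # avoid reordering when retries occur
```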

Spark per topic num of partitions doubt

2016-09-27 Thread Adis Ababa
Hello, I have asked the question on Stack Overflow as well: http://stackoverflow.com/questions/39737201/spark-kafka-per-topic-number-of-partitions-map-not-honored I am confused about the "per topic number of partitions" parameter when creating an InputDStream using KafkaUtils.createStream(...)

Re: Schema Registry in Staging Environment

2016-09-27 Thread Ewen Cheslack-Postava
Lawrence, There are two common ways to approach registration of schemas. The first is to just rely on auto-registration that the serializers do (I'm assuming you're using the Java clients & serializers here, or an equivalent implementation in another language). In this case you can generally just
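Besides auto-registration by the serializers, a schema can be registered explicitly against the Schema Registry REST API (`POST /subjects/<subject>/versions`), where the Avro schema is sent as a JSON-escaped string under a `"schema"` key. A small sketch of building that request body (the `User` schema and subject naming are hypothetical):

```python
import json

def register_payload(avro_schema: dict) -> str:
    """Build the request body for POST /subjects/<subject>/versions:
    the Avro schema itself is JSON-encoded inside the "schema" field."""
    return json.dumps({"schema": json.dumps(avro_schema)})

# Hypothetical Avro schema for a subject like "users-value":
schema = {
    "type": "record",
    "name": "User",
    "fields": [{"name": "id", "type": "long"}],
}
body = register_payload(schema)
```

Posting this body to the staging registry (with `Content-Type: application/vnd.schemaregistry.v1+json`) registers the schema ahead of time instead of relying on the serializer to do it.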

Kafka consumer receiving same message multiple times

2016-09-27 Thread Shamik Banerjee
Hi, I've recently started using Kafka to read documents coming through a web crawler. What I'm noticing is that when I'm dealing with a few million documents, the consumer is processing the same message over and over again. It looks like the data is not getting committed for some reason. This is not
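Reprocessing like this usually means offsets aren't being committed before a rebalance or restart. Until the commit behaviour is fixed, one defensive measure is to make the processing idempotent, e.g. by deduplicating on a message key. A minimal in-memory sketch (a real system would persist the seen keys; the message shape here is hypothetical):

```python
def process_stream(messages, handler):
    """Apply handler once per unique message id, skipping redeliveries."""
    seen = set()
    results = []
    for msg in messages:
        msg_id = msg["id"]
        if msg_id in seen:
            continue  # redelivered message: already processed, skip it
        results.append(handler(msg))
        seen.add(msg_id)
    return results

# Simulated redelivery: the message with id 1 arrives twice
msgs = [{"id": 1, "doc": "a"}, {"id": 2, "doc": "b"}, {"id": 1, "doc": "a"}]
out = process_stream(msgs, lambda m: m["doc"])
```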

Re: micro-batching in kafka streams

2016-09-27 Thread Ara Ebrahimi
One more thing: Guozhang pointed me towards this sample for micro-batching: https://github.com/apache/kafka/blob/177b2d0bea76f270ec087ebe73431307c1aef5a1/streams/examples/src/main/java/org/apache/kafka/streams/examples/wordcount/WordCountProcessorDemo.java This is a good example and
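The WordCountProcessorDemo linked above batches by buffering records in process() and forwarding them in punctuate(). A stand-alone simulation of that micro-batching shape, driven by stream time (the class, interval, and timestamps are illustrative, not Kafka Streams API):

```python
class MicroBatcher:
    """Buffer records and flush every `interval_ms` of stream time,
    mimicking a Processor that forwards buffered state in punctuate()."""

    def __init__(self, interval_ms):
        self.interval_ms = interval_ms
        self.buffer = []
        self.batches = []
        self.next_flush = interval_ms

    def process(self, timestamp_ms, record):
        # Emulate punctuate() firing whenever stream time crosses a boundary
        while timestamp_ms >= self.next_flush:
            self.punctuate()
        self.buffer.append(record)

    def punctuate(self):
        if self.buffer:
            self.batches.append(self.buffer)
            self.buffer = []
        self.next_flush += self.interval_ms


b = MicroBatcher(interval_ms=1000)
for ts, rec in [(100, "a"), (200, "b"), (1200, "c"), (2500, "d")]:
    b.process(ts, rec)
b.punctuate()  # final flush on shutdown
```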

Kafka consumer picking up the same message multiple times

2016-09-27 Thread Shamik Bandopadhyay
Hi, I've recently started using Kafka to read documents coming through a web crawler. What I'm noticing is that when I'm dealing with a few million documents, the consumer is processing the same message over and over again. It looks like the data is not getting committed for some reason. This is not the

Re: Exception while deserializing in kafka streams

2016-09-27 Thread Walter rakoff
Ah, that was it. I was passing the same Serde while creating the topology. It works after I removed it. Thanks! Walter On Mon, Sep 26, 2016 at 1:16 PM, Guozhang Wang wrote: > Hi Walter, > > One thing I can think of is that, if you pass the serde object as part of > your

RE: SendFailedException

2016-09-27 Thread Martin Gainty
Sometimes engineers run scripts out of order, so we will need the exact steps you are following. Are you running through Virtualbox-Vagrant? In that case we will need to see the Vagrantfile.local file: https://www.codatlas.com/github.com/apache/kafka/trunk/vagrant/system-test-Vagrantfile.local We will also

Consumer offsets reset for _all_ topics after increasing partitions for one topic

2016-09-27 Thread Juho Autio
I increased partitions for one existing topic (2->10), but was surprised to see that it entirely reset the committed offsets of my consumer group. All topics & partitions were reset to the earliest offset available, and the consumer read everything again. Documentation doesn't mention anything
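Whether a full reset is expected here depends on offset retention and the reset policy: if the committed offsets were lost (or expired), a consumer falls back to its reset policy for every partition. The relevant settings, with illustrative values (the 24-hour retention default applies to 0.10-era brokers):

```properties
# Broker: how long committed offsets are retained (1440 minutes = 24h default in 0.10)
offsets.retention.minutes=1440

# Consumer: what to do when no committed offset is found;
# "earliest" re-reads everything, "latest" skips to the end
auto.offset.reset=earliest
```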

Re: producer can't push msg sometimes with 1 broker recoved

2016-09-27 Thread Kamal C
Aggie, I'm not able to reproduce your behavior in 0.10.0.1.
> I did more testing and found the rule (the topic is created with "--replication-factor 2 --partitions 1" in the following case):
> node 1           node 2
> down (lead)      down (replica)
> down (replica)   up (lead)
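Whether a partition stays writable when only one of its two replicas is up depends on a couple of broker settings; a hedged sketch of the ones worth checking (values illustrative, and enabling unclean leader election trades durability for availability):

```properties
# Broker settings that affect availability with --replication-factor 2:
unclean.leader.election.enable=true  # allow an out-of-sync replica to become leader
min.insync.replicas=1                # acks=all succeeds with one in-sync replica
```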

Re: Handling out-of-order messaging w/ Kafka Streams

2016-09-27 Thread Eno Thereska
Hi Mathieu, If the messages are sent asynchronously, then what you're observing is indeed right. There is no guarantee that the first will arrive at the destination first. Perhaps you can try sending them synchronously (i.e., wait until the first one is received, before sending the second).
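The effect of Eno's suggestion, blocking on each send before issuing the next, can be shown with a small simulation: the asynchronous path delivers after a random delay (so arrivals can reorder), while the synchronous variant waits for each delivery and therefore preserves order. Everything here is illustrative; with the Java producer the analogue is `producer.send(record).get()`.

```python
import random
import time
from concurrent.futures import ThreadPoolExecutor

def async_send(pool, msg, arrived):
    """Simulated asynchronous send: network delay may reorder arrivals."""
    def deliver():
        time.sleep(random.uniform(0, 0.01))  # simulated network latency
        arrived.append(msg)
    return pool.submit(deliver)

def sync_send(pool, msg, arrived):
    """Synchronous variant: block until delivery before sending the next."""
    async_send(pool, msg, arrived).result()

arrived = []
with ThreadPoolExecutor() as pool:
    for m in ["first", "second", "third"]:
        sync_send(pool, m, arrived)
```

Blocking on each send keeps ordering at the cost of throughput; an alternative is to embed sequence numbers and reorder on the consumer side.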

Initialisation of the context

2016-09-27 Thread Hamza HACHANI
Hi, I would like to know how the context is initialised in Kafka Streams, because I have a problem with one Kafka Streams application. Every time I call it, I notice that the context is initialised, or created, more than once, which is abnormal and causes a bug in the system.