To know about Kafka and its usage

2017-09-27 Thread Parth Patel
Hi, This is to know more about Kafka and how I can use it in my project. I am trying to learn about Big Data Engineering and came across Kafka. I am trying to develop an application which could take some real-time data, filter it, and show some visual outputs, and I would like to know where Kafka

Re: windowed store excessive memory consumption

2017-09-27 Thread Matthias J. Sax
>> I have a feeling that it would be helpful to add this to documentation >> examples as well as javadocs for all methods that do return iterators. That makes sense. Can you create a JIRA for this? Thanks. -Matthias On 9/27/17 2:54 PM, Stas Chizhov wrote: > Thanks, that comment actually made

Re: windowed store excessive memory consumption

2017-09-27 Thread Stas Chizhov
Thanks, that comment actually made its way to the documentation already. Apparently none of that was related. It was a leak - I was not closing an iterator that was returned by https://kafka.apache.org/0110/javadoc/org/apache/kafka/streams/state/ReadOnlyWindowStore.html#fetch(K,%20long,%20long)
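The fix described above is to always close the iterator returned by `fetch()`. A minimal sketch of the pattern, using a plain `AutoCloseable` iterator as a stand-in since the Kafka streams-state API is not part of this snippet (Kafka's `WindowStoreIterator` likewise implements `AutoCloseable` and holds resources until closed):

```java
import java.util.Iterator;
import java.util.List;

// Stand-in for Kafka's WindowStoreIterator: store iterators implement
// AutoCloseable and hold underlying (e.g. native RocksDB) resources
// until close() is called.
class CloseableIterator<T> implements Iterator<T>, AutoCloseable {
    private final Iterator<T> delegate;
    private boolean closed = false;

    CloseableIterator(Iterator<T> delegate) { this.delegate = delegate; }

    @Override public boolean hasNext() { return delegate.hasNext(); }
    @Override public T next() { return delegate.next(); }
    @Override public void close() { closed = true; } // real impl frees resources
    boolean isClosed() { return closed; }
}

public class IteratorLeakDemo {
    // try-with-resources guarantees close() runs even if iteration throws,
    // which is the leak-proof way to consume a store iterator
    static long sumAndClose(CloseableIterator<Integer> it) {
        long sum = 0;
        try (CloseableIterator<Integer> iter = it) {
            while (iter.hasNext()) sum += iter.next();
        }
        return sum;
    }

    public static void main(String[] args) {
        CloseableIterator<Integer> it =
                new CloseableIterator<>(List.of(1, 2, 3).iterator());
        System.out.println(sumAndClose(it)); // 6
        System.out.println(it.isClosed());   // true
    }
}
```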

Re: windowed store excessive memory consumption

2017-09-27 Thread Ted Yu
Have you seen this comment ? https://issues.apache.org/jira/browse/KAFKA-5122?focusedCommentId=15984467&page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-15984467 On Wed, Sep 27, 2017 at 12:44 PM, Stas Chizhov wrote: > Hi, > > I am running a simple

windowed store excessive memory consumption

2017-09-27 Thread Stas Chizhov
Hi, I am running a simple Kafka Streams app (0.11.0.1) that counts messages per hour per partition. The app runs in a Docker container with a memory limit set, which is always reached by the app within a few minutes, and then the container is killed. After running it with various numbers of instances,
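For reference, the counting logic such an app computes can be sketched without any Kafka dependency: bucket each record's timestamp into its one-hour window, keyed per partition (in a real streams app this is what a windowed count over one-hour `TimeWindows` does). Worth noting for the memory question: the RocksDB-backed window stores allocate memory off-heap, so a container limit can be reached even when the JVM heap stays small.

```java
import java.util.HashMap;
import java.util.Map;

// Sketch of per-partition, per-hour counting (no Kafka dependency):
// floor each timestamp to its hour window and count occurrences.
public class HourlyCount {
    static final long HOUR_MS = 60L * 60 * 1000;

    // returns counts keyed by "partition:windowStartMillis"
    static Map<String, Long> countPerHour(int[] partitions, long[] timestamps) {
        Map<String, Long> counts = new HashMap<>();
        for (int i = 0; i < partitions.length; i++) {
            long windowStart = (timestamps[i] / HOUR_MS) * HOUR_MS; // floor to hour
            counts.merge(partitions[i] + ":" + windowStart, 1L, Long::sum);
        }
        return counts;
    }

    public static void main(String[] args) {
        Map<String, Long> c = countPerHour(
                new int[]{0, 0, 1},
                new long[]{1_000L, 2_000L, 3_600_000L});
        System.out.println(c.get("0:0"));       // 2
        System.out.println(c.get("1:3600000")); // 1
    }
}
```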

Re: how to use Confluent connector with Apache Kafka

2017-09-27 Thread Matthias J. Sax
All connectors are compatible with vanilla AK, as Confluent Open Source ships with "plain" Apache Kafka under the hood. So you can just download the connector, plug it in, and configure it like any other connector. https://www.confluent.io/product/connectors/ -Matthias On 9/26/17 1:15 PM,
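Concretely, "plug it in and configure it" means dropping the connector jar onto the worker's plugin/class path and submitting a small properties config. A hedged sketch (the connector class shown is Confluent's JDBC source; the name, URL, and topic prefix are illustrative, not from the thread):

```properties
# Illustrative connector config for a plain Apache Kafka Connect worker.
name=my-jdbc-source
connector.class=io.confluent.connect.jdbc.JdbcSourceConnector
tasks.max=1
connection.url=jdbc:postgresql://localhost:5432/mydb   # hypothetical DB
topic.prefix=jdbc-
mode=incrementing
incrementing.column.name=id
```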

Re: out of order sequence number in exactly once streams

2017-09-27 Thread Matthias J. Sax
An OutOfOrderSequenceException should only occur if an idempotent producer gets out of sync with the broker. If you set `enable.idempotence = true` on your producer, you might want to set `retries = Integer.MAX_VALUE`. -Matthias On 9/26/17 11:30 PM, Sameer Kumar wrote: > Hi,  > > I again
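The two settings suggested above look like this in producer configuration code (the bootstrap address is a placeholder; with idempotence enabled, a very high retries value lets the producer ride out transient broker issues without risking duplicates):

```java
import java.util.Properties;

// Producer settings for an idempotent producer, as suggested above.
public class ProducerConfigSketch {
    static Properties producerProps() {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092"); // illustrative address
        props.put("enable.idempotence", "true");
        props.put("retries", Integer.toString(Integer.MAX_VALUE));
        props.put("acks", "all"); // required by (and implied with) idempotence
        return props;
    }
}
```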

Re: Debugging invalid_request response from a 0.10.2 server for list offset api using librdkafka client

2017-09-27 Thread Vignesh
I understand that it won't support it; my only concern is about the error code. Locally with these settings I get a message format error, 43, which makes sense. In one particular cluster we see an invalid request 42 instead of unsupported format 43. What are the implications of changing the

Re: Debugging invalid_request response from a 0.10.2 server for list offset api using librdkafka client

2017-09-27 Thread Hans Jespersen
The 0.8.1 protocol does not support target timestamps so it makes sense that you would get an invalid request error if the client is sending a Version 1 or Version 2 Offsets Request. The only Offset Request that a 0.8.1 broker knows how to handle is a Version 0 Offsets Request. From

Re: Debugging invalid_request response from a 0.10.2 server for list offset api using librdkafka client

2017-09-27 Thread Vignesh
Correction to the above mail: we get 42 - INVALID_REQUEST, not 43. A few other data points: the server has the following configs set: inter.broker.protocol.version=0.8.1 log.message.format.version=0.8.1 My understanding is that we should get unsupported message format with the above configurations, why do we

out of order sequence number in exactly once streams

2017-09-27 Thread Sameer Kumar
Hi, I again received this exception while running my streams app. I am using Kafka 0.11.0.1. After restarting my app, this error got fixed. I guess this might be due to a bad network. Any pointers? Any config wherein I can configure retries? The exception trace is attached. Regards, -Sameer.

Re: How would Kafka behave in this scenario

2017-09-27 Thread Sameer Kumar
Yes, Steve. I guess the workaround is to choose your min.insync.replicas wisely. Also, in the case of producers with acks=all, the producer would eventually fail after sufficient retries, and streams apps would stall. But they should resume when the brokers are fixed. -Sameer. On Tue, Sep 26, 2017 at
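Choosing min.insync.replicas "wisely" usually means setting it below the replication factor. A hedged sketch (topic name and addresses are illustrative; the command uses the ZooKeeper-based tooling of that Kafka era): with replication-factor 3 and min.insync.replicas=2, an acks=all producer keeps working with one broker down, but gets NotEnoughReplicas, and eventually fails after retries, once a second broker is lost.

```shell
# Illustrative: create a topic that tolerates one broker failure
# without blocking acks=all producers.
kafka-topics.sh --zookeeper localhost:2181 --create \
  --topic events --partitions 6 --replication-factor 3 \
  --config min.insync.replicas=2
```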