Does SinkTaskContext.timeout re-invoke the connector's put method with the original collection of sink records?

2017-09-10 Thread Behrang Saeedzadeh
Hi, If a connector encounters a temporary error (e.g. exceeding throughput) and it calls the timeout(long) method, would it get passed the same set of records that caused the call to timeout(long), or should it keep track of these records internally? Best regards, Behrang
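The question above concerns the Connect sink retry contract: when put() throws a RetriableException, the framework redelivers the same batch after the backoff requested via context.timeout(), so the task does not have to buffer the records itself. The sketch below illustrates that contract with hypothetical, simplified stand-ins (MockSinkTaskContext, FlakySinkTask, the inline retry loop) rather than the real org.apache.kafka.connect classes, so it is self-contained and runnable; treat it as an illustration of the commonly described semantics, not authoritative framework behavior.

```java
import java.util.List;

// Simplified stand-in for org.apache.kafka.connect.errors.RetriableException.
class RetriableException extends RuntimeException {}

// Simplified stand-in for SinkTaskContext: timeout(ms) records a backoff request.
class MockSinkTaskContext {
    long backoffMs = 0;
    void timeout(long ms) { backoffMs = ms; }
}

// A sink task whose downstream system is unavailable on the first attempt.
class FlakySinkTask {
    final MockSinkTaskContext context = new MockSinkTaskContext();
    private boolean downstreamHealthy = false;
    int attempts = 0;

    void put(List<String> records) {
        attempts++;
        if (!downstreamHealthy) {
            downstreamHealthy = true;       // pretend the outage clears
            context.timeout(5_000);         // ask the framework to back off
            throw new RetriableException(); // signal "retry this same batch"
        }
        // records would be written to the external system here
    }
}

public class Main {
    public static void main(String[] args) {
        FlakySinkTask task = new FlakySinkTask();
        List<String> batch = List.of("r1", "r2", "r3");
        // Minimal "framework" loop: on RetriableException, sleep for the
        // requested backoff (elided) and redeliver the SAME batch.
        while (true) {
            try { task.put(batch); break; }
            catch (RetriableException e) { /* sleep context.backoffMs */ }
        }
        System.out.println(task.attempts); // 2
    }
}
```

The key point of the sketch is that the retry loop, not the task, holds on to the batch between attempts.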

Kafka Streams application failed after a long time due to RocksDB errors

2017-09-10 Thread Sachin Mittal
Hi, We have been running a clustered Kafka Streams application, and after about three months of uninterrupted running, a few threads on a couple of instances failed. We checked the logs and found these two common stack traces pointing to the underlying cause in the fetch and put operations of RocksDB. Cause

Kafka Connect sink connector in distributed mode: how are records distributed to workers?

2017-09-10 Thread Behrang Saeedzadeh
Hi, How does Kafka Connect distribute records among workers for a sink connector when the connector is configured to consume from only one topic? * Does it ensure that all records in a given partition are sent to the same worker instance? * When a new worker is added to the cluster, what steps are
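For context on the question above: sink tasks consume through a consumer group, so topic partitions are assigned to tasks by the consumer rebalance protocol, and each partition's records go to exactly one task at a time (reassignment happens on rebalance, e.g. when a worker joins or leaves). A minimal distributed-mode sink connector config might look like the following sketch; the name and topic are illustrative, while connector.class points at the FileStreamSinkConnector that ships with Kafka.

```
name=example-sink
connector.class=org.apache.kafka.connect.file.FileStreamSinkConnector
tasks.max=3
topics=my-topic
file=/tmp/sink-output.txt
```

With tasks.max=3 and a single topic, Connect creates up to three tasks and the group protocol spreads the topic's partitions across them, one owner per partition.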

Re: Kafka 11 | Stream Application crashed the brokers

2017-09-10 Thread Sameer Kumar
Hi Guozhang, Nope, I was not using exactly-once mode. I don't have the client logs with me right now; I will try to replicate it again and share the other details with you. My concern was that it crashed my brokers as well. -Sameer. On Sat, Sep 9, 2017 at 1:51 AM, Guozhang Wang