Re: Is exactly-once possible with kafka consumer-producer ?

2021-07-16 Thread Pushkar Deole
Hi Lerh Chuan Low, Many thanks for your response. I get it now that it provides exactly-once semantics, i.e. it looks to the user as if it is processed exactly once. Also, I am clear on the aspect about the read_committed level, so the uncommitted transaction and hence the uncommitted send won't be visible to

Re: Is exactly-once possible with kafka consumer-producer ?

2021-07-16 Thread Lerh Chuan Low
Pushkar, My understanding is you can easily turn it on by using Kafka Streams, as Chris mentioned. Otherwise you'd have to do it yourself - I don't think you can get exactly-once processing, but what you can do (which is also what Kafka Streams does) is exactly-once semantics (You won't be able
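In Kafka Streams the switch Lerh Chuan refers to is a single property; a minimal, assumed config fragment (property names from the Streams configuration, values as of the 2.x line current at the time of this thread):

```
# Assumed Kafka Streams config fragment (illustrative, not from the thread):
# exactly-once processing is enabled with one property.
processing.guarantee=exactly_once
# On 2.6+ brokers, exactly_once_beta (later renamed exactly_once_v2) selects
# the more scalable implementation from KIP-447.
```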

Re: Is exactly-once possible with kafka consumer-producer ?

2021-07-16 Thread Ran Lupovich
Another acceptable solution is doing idempotent actions: if you re-read the message again, you check "did I process it already?", or do an upsert, and keep it at at-least-once semantics. On Fri, Jul 16, 2021, 19:10 Ran Lupovich < ranlupov...@gmail.com> wrote: > You need to do
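Ran's "did I process it already?" check can be sketched as follows; the dict is an in-memory stand-in for a real database, and the function name is illustrative, not any Kafka API. With at-least-once delivery, a redelivered record becomes a harmless no-op:

```python
# Minimal sketch of an idempotent consumer: apply the side effect at most
# once per record id, so at-least-once redelivery causes no duplicates.

def process_idempotently(record_id, payload, store, side_effect):
    """Return True if the record was processed now, False if seen before."""
    if record_id in store:            # already processed: skip on redelivery
        return False
    store[record_id] = side_effect(payload)   # upsert-style write
    return True

store = {}
assert process_idempotently("msg-1", 2, store, lambda p: p * 10) is True
assert process_idempotently("msg-1", 2, store, lambda p: p * 10) is False  # duplicate
assert store["msg-1"] == 20
```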

Re: Is exactly-once possible with kafka consumer-producer ?

2021-07-16 Thread Ran Lupovich
You need to do atomic actions with processing and saving the partition/offsets, while on rebalance, assign, or initial-start events you read the offset from the outside store. There are documentation and examples on the internet. What type of processing are you doing? On Fri, Jul 16
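The pattern Ran describes, writing the processed result and the partition/offset together in one atomic step in an external store, and resuming from that stored offset after a restart, can be simulated with an in-memory dict standing in for a database transaction (all names here are illustrative):

```python
# Sketch: result + next offset are written together ("atomically" here by
# virtue of running in one step; a real system would use a DB transaction).
# On restart, processing resumes from the stored offset, so redelivered
# records are skipped rather than reprocessed.

def process_batch(db, partition, records):
    start = db["offsets"].get(partition, 0)   # resume point from the store
    for offset, value in records:
        if offset < start:                    # already processed before a crash
            continue
        db["results"].append(value.upper())   # the "processing"
        db["offsets"][partition] = offset + 1 # stored with the result

db = {"offsets": {}, "results": []}
process_batch(db, 0, [(0, "a"), (1, "b")])
process_batch(db, 0, [(0, "a"), (1, "b"), (2, "c")])  # redelivery after restart
assert db["results"] == ["A", "B", "C"]               # no duplicates
```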

Re: Is exactly-once possible with kafka consumer-producer ?

2021-07-16 Thread Pushkar Deole
Chris, I am not sure how this solves the problem scenario that we are experiencing in the customer environment. The scenario is: 1. the application consumed a record and processed it 2. the processed record was produced on the destination topic and the ack was received 3. before committing the offset back to the consumed

Re: Is exactly-once possible with kafka consumer-producer ?

2021-07-16 Thread Chris Larsen
It is not possible out of the box; it is something you'll have to write yourself. Would the following work? Consume -> produce to primary topic -> get success ack back -> commit the consume. Else, if the ack fails, produce to the dead letter topic, then commit upon success. Else, if the dead letter ack fails, exit
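Chris's proposed flow can be sketched like this; `produce_primary`, `produce_dlq`, and `commit` are hypothetical stand-ins for real client calls, and the offset is committed only after a successful ack, so a crash before the commit yields redelivery rather than data loss:

```python
# Sketch of the flow: produce, commit only on ack; on failure try the
# dead-letter topic; if even that fails, stop without committing so the
# record will be redelivered.

def handle(record, produce_primary, produce_dlq, commit):
    if produce_primary(record):       # ack received from the primary topic
        commit(record)
        return "primary"
    if produce_dlq(record):           # primary failed: park it in the DLQ
        commit(record)
        return "dlq"
    raise RuntimeError("dead-letter produce failed; exiting without commit")

committed = []
assert handle("r1", lambda r: True,  lambda r: True, committed.append) == "primary"
assert handle("r2", lambda r: False, lambda r: True, committed.append) == "dlq"
assert committed == ["r1", "r2"]
```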

Re: Is exactly-once possible with kafka consumer-producer ?

2021-07-16 Thread Pushkar Deole
Thanks Chris for the response! The current application is quite evolved, is currently using the consumer-producer model described above, and we need to fix some bugs soon for a customer. So, moving to Kafka Streams seems like bigger work. That's why I am looking at a workaround, if the same thing can be achieved with

Re: Zookeeper : Throttling connections

2021-07-16 Thread Kafka Life
Thank you very much Mr. Israel Ekpo. Really appreciate it. We are using the 0.10 version of Kafka and are in the process of upgrading to 2.6.1. Planning is in process, and yes, these connections to the zookeepers are for Kafka functionality. Frequently there are incidents where the zookeepers get bombarded

Re: Zookeeper : Throttling connections

2021-07-16 Thread Israel Ekpo
Hello, I am assuming you are using Zookeeper because of your Kafka brokers. What version of Kafka are you using? I would like to start by stating that very soon this will no longer be an issue, as the project is taking steps to decouple Kafka from Zookeeper. Take a look at KIP-500 for additional

Re: Is exactly-once possible with kafka consumer-producer ?

2021-07-16 Thread Chris Larsen
Pushkar, in Kafka development, for a custom consumer/producer you handle it. However you can ensure the process stops (or sends the message to a dead letter topic) before manually committing the consumer offset. On the produce side you can turn on idempotence or transactions. But unless you are using Streams,
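The producer-side settings Chris mentions map to a small config fragment; the property names are the Java client's, and the `transactional.id` value here is a hypothetical example:

```
# Assumed Java producer properties (illustrative, not from the thread):
enable.idempotence=true       # broker dedupes retried sends of the same batch
acks=all                      # required when idempotence is enabled
transactional.id=my-app-tx-1  # hypothetical stable id; enables transactions
                              # that span sends and consumer-offset commits
```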

Is exactly-once possible with kafka consumer-producer ?

2021-07-16 Thread Pushkar Deole
Hi All, I am using a normal Kafka consumer-producer in my microservice, with a simple model of consume from source topic -> process the record -> produce on destination topic. I am mainly looking for an exactly-once guarantee wherein the offset commit to the consumed topic and the produce on the destination
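The guarantee asked for here, the produced record and the consumed offset becoming visible together or not at all, is what Kafka transactions provide. A minimal in-memory simulation of that atomicity (the `Transaction` class is a stand-in, not a real client object):

```python
# Sketch of the transactional consume-process-produce pattern: staged
# output records and staged consumer offsets are published in one commit,
# so a crash before commit() leaves neither visible.

class Transaction:
    def __init__(self, topic_log, offset_store):
        self.topic_log, self.offset_store = topic_log, offset_store
        self.staged_records, self.staged_offsets = [], {}

    def send(self, value):                    # stage a produced record
        self.staged_records.append(value)

    def send_offsets(self, offsets):          # stage the consumed offsets
        self.staged_offsets.update(offsets)

    def commit(self):                         # both become visible together
        self.topic_log.extend(self.staged_records)
        self.offset_store.update(self.staged_offsets)

out_log, offsets = [], {}
tx = Transaction(out_log, offsets)
tx.send("processed-a")
tx.send_offsets({("src", 0): 1})
tx.commit()
assert out_log == ["processed-a"] and offsets[("src", 0)] == 1
```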

Zookeeper : Throttling connections

2021-07-16 Thread Kafka Life
Dear Kafka & Zookeeper experts, 1/ What is zookeeper throttling? Is it done at zookeeper? How is it configured? 2/ Is it helpful?

Re: Kafka 2.7.1 Rebalance failed DisconnectException

2021-07-16 Thread Tony John
Hi All, An update on this. Finally I could figure out the cause for this. I have a consumer with *MAX_POLL_INTERVAL_MS_CONFIG* set to *Integer.MAX_VALUE*, which was causing the problem. Looks like it's a combination of *group.initial.rebalance.delay.ms* in
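For reference, the two settings involved in Tony's finding, with their usual 2.x defaults (verify against your broker/client version):

```
# Consumer config: max time between poll() calls before the consumer is
# considered failed and the group rebalances. Setting it to
# Integer.MAX_VALUE effectively disables that safety net.
max.poll.interval.ms=300000
# Broker config: delay before the first rebalance of a newly created group.
group.initial.rebalance.delay.ms=3000
```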