Re: [akka-user] Akka Kafka Stream - only once delivery

2017-06-28 Thread Arun
Thanks Michał.

On Wednesday, June 28, 2017 at 4:15:30 PM UTC+5:30, Michal Borowiecki wrote:
> If you need exactly once semantics against your target database, the common pattern is to store your last processed offset in that database transactionally together with your output records,

Re: [akka-user] Akka Kafka Stream - only once delivery

2017-06-28 Thread 'Michal Borowiecki' via Akka User List
If you need exactly-once semantics against your target database, the common pattern is to store your last processed offset in that database transactionally, together with your output records, instead of committing back to Kafka. On startup you'd read the last offset from your database and seek the consumer to that position.
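A minimal sketch of that pattern, assuming akka-stream-kafka (Alpakka Kafka) 0.x APIs. The `db.lastOffset` and `db.insertWithOffset` calls are hypothetical placeholders for your own DAO; everything else (`Consumer.plainSource`, `Subscriptions.assignmentWithOffset`) is from the library:

```scala
import akka.actor.ActorSystem
import akka.kafka.scaladsl.Consumer
import akka.kafka.{ConsumerSettings, Subscriptions}
import akka.stream.ActorMaterializer
import akka.stream.scaladsl.Sink
import org.apache.kafka.common.TopicPartition
import org.apache.kafka.common.serialization.StringDeserializer

import scala.concurrent.Future

object ExactlyOnceSketch extends App {
  implicit val system = ActorSystem("exactly-once")
  implicit val mat    = ActorMaterializer()
  import system.dispatcher

  val consumerSettings =
    ConsumerSettings(system, new StringDeserializer, new StringDeserializer)
      .withBootstrapServers("localhost:9092")
      .withGroupId("my-group")

  val partition = new TopicPartition("my-topic", 0)

  // 1. On startup, read the last processed offset from the target database
  //    (db.lastOffset is a placeholder for your own DAO call).
  val startFrom: Future[Long] = db.lastOffset(partition)

  startFrom.foreach { offset =>
    Consumer
      // 2. Seek to offset + 1 yourself, instead of relying on offsets
      //    committed back to Kafka.
      .plainSource(consumerSettings,
        Subscriptions.assignmentWithOffset(partition -> (offset + 1)))
      // 3. Write each record AND its offset in one database transaction,
      //    so a crash can never separate the two.
      .mapAsync(1) { record =>
        db.insertWithOffset(record.value, record.offset, partition)
      }
      .runWith(Sink.ignore)
  }
}
```

Note that `Consumer.plainSource` is used rather than a committable source: since the offset lives in the database, there is nothing to commit to Kafka.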

[akka-user] Akka Kafka Stream - only once delivery

2017-06-28 Thread Arun
Hi, I am using Akka Kafka's Consumer.committablePartitionedSource to stream messages from Kafka and group them by a group key with groupedWithin. The grouped records should be written to the database, and only then should the offset be committed. The code skeleton is as follows: val
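A hedged completion of a skeleton like the one described, assuming akka-stream-kafka 0.x APIs (`committablePartitionedSource`, `CommittableOffsetBatch`, `commitScaladsl`). The topic name, `maxPartitions`, and the `saveBatch` database call are hypothetical placeholders:

```scala
import akka.kafka.ConsumerMessage.CommittableOffsetBatch
import akka.kafka.scaladsl.Consumer
import akka.kafka.Subscriptions
import akka.stream.scaladsl.Sink

import scala.concurrent.duration._

val maxPartitions = 8 // parallelism bound for flatMapMerge, an assumption

val control =
  Consumer
    .committablePartitionedSource(consumerSettings, Subscriptions.topics("my-topic"))
    .flatMapMerge(maxPartitions, { case (topicPartition, source) =>
      source
        // Group up to 1000 records, or whatever arrives within 10 seconds.
        .groupedWithin(1000, 10.seconds)
        // Write the whole group to the database first...
        .mapAsync(1) { msgs =>
          saveBatch(msgs.map(_.record.value)).map(_ => msgs)
        }
        // ...then commit the batched offsets back to Kafka.
        .mapAsync(1) { msgs =>
          msgs
            .foldLeft(CommittableOffsetBatch.empty)((b, m) => b.updated(m.committableOffset))
            .commitScaladsl()
        }
    })
    .runWith(Sink.ignore)
```

As the reply in this thread points out, committing to Kafka after the database write gives at-least-once delivery (a crash between the write and the commit replays the group), not exactly-once; for exactly-once the offset must be stored in the database transaction itself.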