Hi Huan,

What you need is a back-channel from the Cassandra sink to the Kafka
source. In Akka Streams the way to solve this is to close the loop with an
explicit backwards channel, i.e. a cycle in the stream graph.

For example, if you have an async Kafka API that gives back the result of
your read as a Future, together with some metadata usable for acking, you
can model it like this (pseudocode; I don't know what the Kafka API looks like):

val kafkaReader: Flow[AckId, (MyData, AckId), Unit] =
  Flow[AckId].mapAsync(parallelism = 4) { ack =>
    kafka.acknowledge(ack)
    kafka.readAsync() // Returns Future[(MyData, AckId)]
  }


val cassandraWriter: Flow[(MyData, AckId), AckId, Unit] =
  Flow[(MyData, AckId)].mapAsync(parallelism = 4) { case (data, ackId) =>
    cassandra.writeAsync(data).map(_ => ackId) // Returns Future[AckId]
  }

kafkaReader.join(cassandraWriter).run() // Close the loop and start the stream

The above code does not handle failed writes; you need to add that
handling yourself (for example, instead of sending a bare AckId from the
Cassandra flow, you send a Success(AckId) or Failure(AckId)). Also, the above
cycle will not run as written, because there are no initial acknowledgments
circulating in the loop to start it. You can kick it off by injecting 4 dummy
acknowledgments (one per mapAsync slot):

kafkaReader.join(cassandraWriter ++ Source(List(dummyAck1, ...))).run()

(alternatively, you can concat kafkaReader with a Source that injects 4
initial reads from Kafka)
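To make the control flow of that cycle concrete without depending on the Akka
or Kafka APIs, here is a minimal plain-Scala simulation of the same idea:
dummy acks prime the loop, each ack flowing back permits one more read, and a
failed write is redelivered instead of acknowledged (at-least-once). All names
here (Msg, kafkaRead, cassandraWrite, etc.) are hypothetical stand-ins, not
real client APIs. Note that scala.util.Failure can only wrap a Throwable, so
this sketch uses Either to carry the offset on the failure path instead.

```scala
import scala.collection.mutable

// In-memory stand-ins for the Kafka source and Cassandra sink; every name
// here is hypothetical, just to make the cycle's control flow concrete.
object AckLoopSketch {
  case class Msg(offset: Int, payload: String)

  val topic     = Vector(Msg(0, "a"), Msg(1, "b"), Msg(2, "c"))
  val cassandra = mutable.Buffer.empty[String]
  var committed = -1    // highest acknowledged Kafka offset
  var cursor    = 0     // consumer read position
  var failOnce  = true  // make the write of "b" fail exactly once

  def kafkaAcknowledge(offset: Int): Unit =
    if (offset >= 0) committed = math.max(committed, offset)

  def kafkaRead(): Option[Msg] =
    if (cursor < topic.size) { val m = topic(cursor); cursor += 1; Some(m) }
    else None

  // Left = failed offset (do not ack, retry), Right = safe to acknowledge.
  def cassandraWrite(m: Msg): Either[Int, Int] =
    if (m.payload == "b" && failOnce) { failOnce = false; Left(m.offset) }
    else { cassandra += m.payload; Right(m.offset) }

  // The closed loop: each ack flowing back permits one more read. Four
  // dummy acks (offset -1) prime it, like Source(List(dummyAck1, ...)).
  def run(): (List[String], Int) = {
    val pending = mutable.Queue(-1, -1, -1, -1)
    val retries = mutable.Queue.empty[Msg]
    while (pending.nonEmpty) {
      kafkaAcknowledge(pending.dequeue())
      val next = if (retries.nonEmpty) Some(retries.dequeue()) else kafkaRead()
      next.foreach { m =>
        cassandraWrite(m) match {
          case Right(offset) => pending.enqueue(offset) // ack on a later cycle
          case Left(_) =>
            retries.enqueue(m)    // redeliver the unacked message
            pending.enqueue(-1)   // dummy ack keeps a permit circulating
        }
      }
    }
    (cassandra.toList, committed)
  }
}
```

Running it, all three payloads end up written (the failed one via redelivery)
and the committed offset only ever advances past messages that were actually
stored, which is exactly the at-least-once behaviour you asked for.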

-Endre

On Mon, May 25, 2015 at 8:32 PM, Juan José Vázquez Delgado <
[email protected]> wrote:

> Hello everyone,
>
> I'm starting with Akka Streams and so far everything is going well.
> However, I have run into a use case that I don't know how to approach. The
> scenario is a stream with an ActorPublisher as a source that consumes
> messages from Kafka and a subscriber as a sink that updates a Cassandra
> table.
>
> Kafka ~> some mapping operations ~> Cassandra
>
> The point is that I'd like to explicitly confirm to Kafka every time a
> message has been successfully processed and inserted into Cassandra, so that
> I could re-read the message in case a disaster happens and the service
> fails, i.e. some kind of at-least-once delivery behaviour. How could I
> approach this in terms of Akka Streams? Is this a supported scenario?
>
> It's true that I can always configure the Kafka consumer with auto-commit
> behaviour, but I'd rather take control of how I'm reading the messages.
>
> Thanks in advance for your help.
>
> --
> >>>>>>>>>> Read the docs: http://akka.io/docs/
> >>>>>>>>>> Check the FAQ:
> http://doc.akka.io/docs/akka/current/additional/faq.html
> >>>>>>>>>> Search the archives: https://groups.google.com/group/akka-user
> ---
> You received this message because you are subscribed to the Google Groups
> "Akka User List" group.
> To unsubscribe from this group and stop receiving emails from it, send an
> email to [email protected].
> To post to this group, send email to [email protected].
> Visit this group at http://groups.google.com/group/akka-user.
> For more options, visit https://groups.google.com/d/optout.
>



-- 
Akka Team
Typesafe - Reactive apps on the JVM
Blog: letitcrash.com
Twitter: @akkateam
