Persistent Actor A consumes from Kafka and stores some events (let's call
them ProcessingRequested). Persistent Actor B runs a processing stream
whose source is the tagged events from Persistent Actor A. As messages exit
the processing stream, they are fed back to B, which persists
ProcessingSucceeded or ProcessingFailed. Works like a charm.
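
For concreteness, B's stream has roughly this shape. The snippet below is only
a sketch: it assumes the LevelDB read journal, a made-up "processing" tag, and
placeholder process/actorB stubs; on Akka versions before 2.5 eventsByTag takes
a Long offset instead of an Offset.

import akka.Done
import akka.actor.{ActorRef, ActorSystem}
import akka.persistence.query.{EventEnvelope, Offset, PersistenceQuery}
import akka.persistence.query.journal.leveldb.scaladsl.LeveldbReadJournal
import akka.stream.ActorMaterializer
import akka.stream.scaladsl.Sink
import scala.concurrent.Future

implicit val system = ActorSystem("processing")
implicit val mat = ActorMaterializer()

val actorB: ActorRef = ???                     // Persistent Actor B (placeholder)
def process(event: Any): Future[Any] = ???     // the actual processing step (placeholder)

val queries = PersistenceQuery(system)
  .readJournalFor[LeveldbReadJournal](LeveldbReadJournal.Identifier)

// live stream of everything Persistent Actor A tagged as "processing"
queries
  .eventsByTag("processing", Offset.noOffset)
  .mapAsync(parallelism = 4)((env: EventEnvelope) => process(env.event))
  .runWith(Sink.actorRef(actorB, onCompleteMessage = Done)) // B persists Succeeded/Failed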

However, we want to feed some failed elements back into the processing
stream in a resilient fashion. It occurred to us that we could tag
ProcessingFailed with the same tag as ProcessingRequested, so that failed
events automatically re-enter the stream.
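
Concretely, I picture a WriteEventAdapter along these lines (the event classes
and the tag name are just placeholders), registered against the journal via
the usual event-adapters / event-adapter-bindings config:

import akka.persistence.journal.{Tagged, WriteEventAdapter}

// placeholder event types for the sketch
final case class ProcessingRequested(payload: String)
final case class ProcessingFailed(payload: String, reason: String)

class ProcessingTaggingAdapter extends WriteEventAdapter {
  override def manifest(event: Any): String = ""

  override def toJournal(event: Any): Any = event match {
    case e: ProcessingRequested => Tagged(e, Set("processing"))
    // same tag, so a persisted failure reappears in the eventsByTag stream
    case e: ProcessingFailed    => Tagged(e, Set("processing"))
    case other                  => other
  }
}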

This feels nice and simple, but also a little artificial. My sense is that the
primary use case for eventsByTag is merging the events of multiple instances
of an Aggregate.

I haven't mastered GraphStage, AtLeastOnceDelivery, or Source.queue, which I
imagine might offer an alternative design.
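
For what it's worth, the rough Source.queue variant I can picture would merge
an in-memory "retry lane" into the tagged source (continuing with the same
placeholder names as the sketch above):

import akka.stream.OverflowStrategy
import akka.stream.scaladsl.{Keep, Sink, Source}

// main source: tagged events, as before
val tagged = queries.eventsByTag("processing", Offset.noOffset).map(_.event)

// merge an in-memory queue of retries into the main source
val (retryQueue, _) =
  Source.queue[Any](bufferSize = 128, OverflowStrategy.backpressure)
    .merge(tagged)
    .mapAsync(parallelism = 4)(process)
    .toMat(Sink.actorRef(actorB, onCompleteMessage = Done))(Keep.both)
    .run()

// when an element fails, re-offer it:
// retryQueue.offer(failedEvent)

The queue is purely in-memory though, so anything buffered there would be lost
on a restart. Any comments?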
