Hi,

We are evaluating Storm with Trident to process events from Kafka with exactly-once semantics, so that the consuming application need not worry about de-duplication logic.
A sample use case could be:

1) Read an event from Kafka
2) Parse the JSON
3) Process the data
4) Write the result to DynamoDB and Apache Solr

I want to understand how Trident handles exactly-once semantics when the processing updates one or more distributed systems, as in this case. What happens if the write to one of them fails? I would also like to know the implementation details of how Trident supports exactly-once semantics.

Thanks,
Ajay
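To make the question concrete: my understanding from the Trident state documentation is that exactly-once updates are achieved by processing tuples in batches tagged with a transaction id, and storing that txid alongside the state value, so a replayed batch is applied at most once. Below is a minimal sketch of that idea in plain Python (not Trident's actual API; the class and method names are purely illustrative):

```python
class TransactionalStore:
    """Toy key-value store that applies each batch at most once,
    in the spirit of Trident's transactional state: the txid of the
    batch that last wrote a key is stored next to its value."""

    def __init__(self):
        self._data = {}  # key -> (last_txid, value)

    def apply_batch(self, txid, updates):
        """Apply {key: increment} updates for one batch, idempotently."""
        for key, inc in updates.items():
            last_txid, value = self._data.get(key, (None, 0))
            if last_txid == txid:
                # This batch was already applied to this key
                # (e.g. the batch was replayed after a failure); skip it.
                continue
            self._data[key] = (txid, value + inc)

    def get(self, key):
        return self._data.get(key, (None, 0))[1]


store = TransactionalStore()
store.apply_batch(1, {"a": 5})
store.apply_batch(1, {"a": 5})  # replay of batch 1: not double-counted
store.apply_batch(2, {"a": 3})
print(store.get("a"))  # -> 8, not 13
```

What I am unsure about, and what prompts the question, is how this txid mechanism behaves when a single batch must update two independent stores (DynamoDB and Solr here): if the DynamoDB write commits but the Solr write fails and the batch is replayed, does each store skip or apply the replay independently based on its own recorded txid?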
