Github user uddhavarote commented on the issue:
https://github.com/apache/storm/pull/2790
@srdo Thanks for the details. Yeah, I am aware of the drawbacks. But I
think emitting this next tuple into a separate stream, rather than the
`default` stream, should not cause the tuples to be replayed in case of
failure. That way, we create disjoint streams after the Kafka bolt.
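To make that concrete, here is a minimal sketch of the kind of bolt I
have in mind. The stream id `metadata`, the topic name, and the
`key`/`value` input fields are all made up for illustration, and the
blocking send is a simplification; this is not the actual `KafkaBolt`
code in this PR:
```java
import java.util.Map;
import java.util.Properties;
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerRecord;
import org.apache.kafka.clients.producer.RecordMetadata;
import org.apache.storm.task.OutputCollector;
import org.apache.storm.task.TopologyContext;
import org.apache.storm.topology.OutputFieldsDeclarer;
import org.apache.storm.topology.base.BaseRichBolt;
import org.apache.storm.tuple.Fields;
import org.apache.storm.tuple.Tuple;
import org.apache.storm.tuple.Values;

public class MetadataEmittingKafkaBolt extends BaseRichBolt {
    // Hypothetical stream id; downstream bolts subscribe to this stream only.
    public static final String METADATA_STREAM = "metadata";

    private final Properties producerProps;
    private transient KafkaProducer<String, String> producer;
    private transient OutputCollector collector;

    public MetadataEmittingKafkaBolt(Properties producerProps) {
        this.producerProps = producerProps;
    }

    @Override
    public void prepare(Map<String, Object> topoConf, TopologyContext context,
                        OutputCollector collector) {
        this.collector = collector;
        this.producer = new KafkaProducer<>(producerProps);
    }

    @Override
    public void execute(Tuple input) {
        ProducerRecord<String, String> record = new ProducerRecord<>(
            "TopicA", input.getStringByField("key"),
            input.getStringByField("value"));
        try {
            // Blocking send keeps the emit on the executor thread; a real
            // implementation would use the async producer callback instead.
            RecordMetadata meta = producer.send(record).get();
            // Emit the metadata on the separate stream, anchored to the
            // input tuple, so the default stream stays untouched.
            collector.emit(METADATA_STREAM, input,
                new Values(meta.topic(), meta.partition(), meta.offset()));
            collector.ack(input);
        } catch (Exception e) {
            collector.fail(input);
        }
    }

    @Override
    public void declareOutputFields(OutputFieldsDeclarer declarer) {
        declarer.declareStream(METADATA_STREAM,
            new Fields("topic", "partition", "offset"));
    }
}
```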
The proposed topology looks good. However, if access to `Topic A` is not
available due to authorization, or if only the `RecordMetadata` is
required, there is no point in reading back the whole topic. I think one
would be better off emitting the next tuple in the same topology rather
than writing a new topology to read the data from `RecordMetadata`.
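Wired up in the same topology, that could look something like this
(`EventSpout`, `MetadataConsumerBolt`, and `producerProps` are
placeholders for whatever feeds the Kafka bolt and consumes the
metadata):
```java
import org.apache.storm.topology.TopologyBuilder;

TopologyBuilder builder = new TopologyBuilder();
builder.setSpout("eventSpout", new EventSpout());
builder.setBolt("kafkaBolt", new MetadataEmittingKafkaBolt(producerProps))
       .shuffleGrouping("eventSpout");
// Subscribes only to the metadata stream, so it receives the
// RecordMetadata without ever reading Topic A back from Kafka.
builder.setBolt("metadataConsumer", new MetadataConsumerBolt())
       .shuffleGrouping("kafkaBolt", MetadataEmittingKafkaBolt.METADATA_STREAM);
```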
I would like to suggest that access to the `Tuple` be provided, along
with a note about the consequences in such a case.
---