Would also add that messages are only ack'd after being committed to NiFi's repository, so a failure to deliver them somewhere will not result in data loss; they would still be in NiFi, and the failure can be handled by retrying or by routing the failed data somewhere else.
On Fri, Feb 14, 2020 at 7:55 PM Pierre Villard <[email protected]> wrote:

> Hi Asmath,
>
> It's usually preferred to set the offset for a given consumer group at the
> Kafka topic level rather than specifying it on the consumer side. There is
> a JIRA to allow specifying a custom offset in the Kafka processors [1],
> but no one is working on it as far as I can tell.
>
> Regarding error handling after the message is consumed by the processor,
> this is a use case where NiFi Stateless [2] would make a lot of sense,
> since it turns your flow into a single transaction and would ack the
> message at the ConsumeKafka level only once the data is successfully sent
> to the database.
>
> [1] https://issues.apache.org/jira/browse/NIFI-4985
> [2] https://github.com/apache/nifi/tree/master/nifi-nar-bundles/nifi-framework-bundle/nifi-framework/nifi-stateless
>
> Thanks,
> Pierre
>
> On Fri, Feb 14, 2020 at 4:28 PM, KhajaAsmath Mohammed
> <[email protected]> wrote:
>
>> Hi Community,
>>
>> I am looking for some information on how to access Kafka offsets, keys,
>> and the entire value of a Kafka message using ConsumeKafkaRecord.
>>
>> Is there any other processor to consume messages from a particular
>> offset?
>>
>> Also, what happens if there is an error after consuming messages? Assume
>> there is a database failure and we lose those messages. Potential data
>> loss in this case.
>>
>> Thanks,
>> Asmath
>>

-- 
Sent from Gmail Mobile
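For what it's worth, the topic-level offset reset Pierre mentions can be done with the standard Kafka CLI shipped with the broker. A minimal sketch, assuming a broker at localhost:9092 and illustrative topic/group names (`my-topic`, the consumer group configured in your ConsumeKafka processor, here `nifi-group`):

```shell
# Stop the ConsumeKafka processor first: offsets can only be reset
# while the consumer group has no active members.

# Dry run: show what the new offsets would be without applying them.
kafka-consumer-groups.sh --bootstrap-server localhost:9092 \
  --group nifi-group --topic my-topic \
  --reset-offsets --to-offset 1000 --dry-run

# Apply the reset. Other strategies include --to-earliest,
# --to-latest, and --to-datetime <ISO-8601 timestamp>.
kafka-consumer-groups.sh --bootstrap-server localhost:9092 \
  --group nifi-group --topic my-topic \
  --reset-offsets --to-offset 1000 --execute

# Restart ConsumeKafka; it will resume from the reset offsets.
```

Once the group's offsets are moved, the processor picks up from the new position with no change to the flow itself.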
