Hi Rajib,

Generally, it's best to let Kafka handle offset management.
Under normal circumstances, when you restart a consumer it will resume
reading from the last committed offset for its consumer group, so there's no
need for you to manage that process yourself.
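For example, something like the following (a minimal sketch with the Java
client; the broker address, topic and group id are placeholders) will pick
up from the last committed position after a restart without any extra
bookkeeping on your side:

import java.time.Duration;
import java.util.Collections;
import java.util.Properties;
import org.apache.kafka.clients.consumer.ConsumerConfig;
import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.clients.consumer.ConsumerRecords;
import org.apache.kafka.clients.consumer.KafkaConsumer;
import org.apache.kafka.common.serialization.StringDeserializer;

public class ResumeFromCommittedOffset {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");
        props.put(ConsumerConfig.GROUP_ID_CONFIG, "my-group");
        props.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class.getName());
        props.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class.getName());
        // enable.auto.commit is true by default, so offsets are committed
        // periodically and a restarted consumer in the same group resumes
        // from the last committed offset.
        try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props)) {
            consumer.subscribe(Collections.singletonList("my-topic"));
            while (true) {
                ConsumerRecords<String, String> records = consumer.poll(Duration.ofMillis(500));
                for (ConsumerRecord<String, String> record : records) {
                    System.out.printf("partition=%d offset=%d value=%s%n",
                            record.partition(), record.offset(), record.value());
                }
            }
        }
    }
}
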
If you need to commit offsets manually rather than relying on auto-commit,
you can use one of the commit API methods, commitSync
<https://kafka.apache.org/25/javadoc/org/apache/kafka/clients/consumer/KafkaConsumer.html#commitSync-->
or commitAsync
<https://kafka.apache.org/25/javadoc/org/apache/kafka/clients/consumer/KafkaConsumer.html#commitAsync-org.apache.kafka.clients.consumer.OffsetCommitCallback->.
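
If you do go the manual route, the usual pattern is to disable auto-commit
and commit after the records from a poll have been processed. A rough sketch
along those lines (again with the Java client; the configuration values and
the process() method are just placeholders):

import java.time.Duration;
import java.util.Collections;
import java.util.Properties;
import org.apache.kafka.clients.consumer.ConsumerConfig;
import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.clients.consumer.ConsumerRecords;
import org.apache.kafka.clients.consumer.KafkaConsumer;
import org.apache.kafka.common.serialization.StringDeserializer;

public class ManualCommitConsumer {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");
        props.put(ConsumerConfig.GROUP_ID_CONFIG, "my-group");
        props.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class.getName());
        props.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class.getName());
        // Disable auto-commit so the application decides when offsets are committed.
        props.put(ConsumerConfig.ENABLE_AUTO_COMMIT_CONFIG, "false");

        try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props)) {
            consumer.subscribe(Collections.singletonList("my-topic"));
            while (true) {
                ConsumerRecords<String, String> records = consumer.poll(Duration.ofMillis(500));
                for (ConsumerRecord<String, String> record : records) {
                    process(record); // application-specific processing
                }
                // Synchronously commit the offsets returned by the last poll();
                // commitAsync() with a callback is the non-blocking alternative.
                consumer.commitSync();
            }
        }
    }

    private static void process(ConsumerRecord<String, String> record) {
        System.out.printf("offset=%d value=%s%n", record.offset(), record.value());
    }
}

Committing only after processing gives you at-least-once delivery; after a
crash you may see some records again, so the processing should be able to
tolerate duplicates.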

-Bill


On Mon, May 11, 2020 at 9:52 PM Rajib Deb <rajib_...@infosys.com> wrote:

> Hi, I wanted to know if it is a good practice to develop a custom offset
> management method while consuming from Kafka. I am thinking of developing
> it as below.
>
>
>   1.  Create a PartitionInfo named tuple as below:
>
>       from collections import namedtuple
>       PartitionInfo = namedtuple("PartitionInfo", ["header", "custom_writer", "offset"])
>
>   2.  Then populate the tuple with the header, writer and last offset
> details
>   3.  Write the tuple to a file/database once the consumer commits the
> message
>   4.  Next time the consumer starts, it checks the last offset and reads
> from there
>
> Thanks
> Rajib
>
>
