You can read a record's timestamp via ConsumerRecord#timestamp(), similar to ConsumerRecord#key() and ConsumerRecord#value().
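A minimal sketch of the client-side filtering described in this thread: the age check is plain arithmetic on the value returned by ConsumerRecord#timestamp(), so it is shown here as a standalone helper that runs without a broker. The 5-minute bound and the otp-sms topic come from the thread; the class and method names are my own for illustration, and the poll loop in the trailing comment is the usual consumer pattern, not code from the thread.

```java
public class OtpAgeFilter {

    // OTPs are invalid after 5 minutes (retention.ms=300000 in the thread)
    static final long MAX_AGE_MS = 5 * 60 * 1000L;

    // The check to apply to each record inside the poll loop:
    // drop the record when its timestamp is older than the bound.
    static boolean isExpired(long recordTimestampMs, long nowMs) {
        return nowMs - recordTimestampMs > MAX_AGE_MS;
    }

    public static void main(String[] args) {
        long now = System.currentTimeMillis();
        System.out.println(isExpired(now - 10 * 60 * 1000L, now)); // 10 min old -> true
        System.out.println(isExpired(now - 60 * 1000L, now));      // 1 min old  -> false

        // Inside a real consumer the same check would look like (sketch only):
        //   for (ConsumerRecord<String, String> r : consumer.poll(Duration.ofMillis(500))) {
        //       if (isExpired(r.timestamp(), System.currentTimeMillis())) continue;
        //       process(r.value());
        //   }
    }
}
```

Note that by default ConsumerRecord#timestamp() is the producer-side CreateTime; if the topic is configured with LogAppendTime, it reflects the broker append time instead.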
-Matthias

On 5/28/18 11:22 PM, Shantanu Deshmukh wrote:
> But then I wonder, why are such things not mentioned anywhere in the Kafka
> configuration documentation? I relied on that setting and it caused us some
> issues. If it were mentioned clearly, everyone would be aware. Could you
> please point me in the right direction for reading the timestamp of a log
> message? I will look into implementing that solution in code.
>
> On Tue, May 29, 2018 at 11:37 AM Matthias J. Sax <matth...@confluent.io>
> wrote:
>
>> Retention time is a lower bound for how long it is guaranteed that data
>> will be stored. This guarantee works "one way" only. There is no
>> guarantee when data will be deleted after the bound has passed.
>>
>> However, on the client side, you can always check the record timestamp
>> and drop older data that is still in the topic.
>>
>> Hope this helps.
>>
>> -Matthias
>>
>> On 5/28/18 9:18 PM, Shantanu Deshmukh wrote:
>>> Please help.
>>>
>>> On Mon, May 28, 2018 at 5:18 PM Shantanu Deshmukh <shantanu...@gmail.com>
>>> wrote:
>>>
>>>> I have a topic otp-sms. I want the retention of this topic to be 5
>>>> minutes, as OTPs are invalid after that amount of time. So I set
>>>> retention.ms=300000. However, this was not working. Reading the Kafka
>>>> configuration documentation in more depth, I found another topic-level
>>>> setting that can be tuned for topic retention to work properly, so I
>>>> set segment.ms=300000 as well.
>>>>
>>>> After these changes I saw that old logs got deleted. Still, in the
>>>> topic I could see one record that was more than 15 minutes old and not
>>>> getting deleted. What does one have to do to actually delete messages
>>>> generated n minutes ago?
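For reference, the two topic-level settings Shantanu describes can be applied together with the kafka-configs.sh tool. This is only a sketch of that configuration, assuming a local ZooKeeper at localhost:2181 (Kafka versions of this era addressed topic configs via --zookeeper; newer releases use --bootstrap-server instead); retention still only removes whole rolled segments, which is why segment.ms is set alongside retention.ms.

```shell
# Set both retention.ms and segment.ms on the otp-sms topic.
# Deletion applies to rolled segments only, so a short segment.ms
# is needed for a short retention.ms to take effect promptly.
bin/kafka-configs.sh --zookeeper localhost:2181 \
  --entity-type topics --entity-name otp-sms \
  --alter --add-config retention.ms=300000,segment.ms=300000
```

Even with both settings, deletion is asynchronous (the log cleaner runs periodically), so the client-side timestamp check remains the only reliable way to ignore expired records.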