[ https://issues.apache.org/jira/browse/KAFKA-10493?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17331042#comment-17331042 ]
Matthias J. Sax commented on KAFKA-10493:
-----------------------------------------

Not sure I understand the idea about "restore by timestamp" – if compaction deleted the record with the larger timestamp but lower offset, that record is gone, and only the out-of-order record is left. I do agree that versioned tables would also help with this issue, but I guess the bottom-line question is about the timeline. We had the idea to address this ticket in 3.0.0, but given the current discussion, I am no longer sure whether we should really do it, or whether we would need to wait for other tickets to be addressed first.

> KTable out-of-order updates are not being ignored
> -------------------------------------------------
>
>                 Key: KAFKA-10493
>                 URL: https://issues.apache.org/jira/browse/KAFKA-10493
>             Project: Kafka
>          Issue Type: Bug
>          Components: streams
>    Affects Versions: 2.6.0
>            Reporter: Pedro Gontijo
>            Assignee: Matthias J. Sax
>            Priority: Blocker
>             Fix For: 3.0.0
>
>         Attachments: KTableOutOfOrderBug.java
>
>
> On a materialized KTable, out-of-order records for a given key (records whose timestamps are older than the current value in the store) are not being ignored; instead they update the local store value and are also forwarded.
> I believe the bug is here:
> [https://github.com/apache/kafka/blob/2.6.0/streams/src/main/java/org/apache/kafka/streams/state/internals/ValueAndTimestampSerializer.java#L77]
> It should return true, not false (see the javadoc).
> The bug has impact here:
> [https://github.com/apache/kafka/blob/2.6.0/streams/src/main/java/org/apache/kafka/streams/kstream/internals/KTableSource.java#L142-L148]
> I have attached a simple streams app that shows the issue happening.
> Thank you!

--
This message was sent by Atlassian Jira
(v8.3.4#803005)
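The semantics the ticket expects (drop an update whose timestamp is older than the one already in the store for that key) can be sketched as a standalone example. This is only an illustration of the intended behavior; the class and method names below are hypothetical and are not Kafka Streams internals:

```java
import java.util.HashMap;
import java.util.Map;

// Illustrative sketch of the intended out-of-order handling for a
// materialized KTable: an update is applied only if its timestamp is
// not older than the timestamp already stored for the same key.
// Hypothetical names; not the actual Kafka Streams implementation.
public class OutOfOrderDemo {
    // key -> {value, timestamp}
    static final Map<String, long[]> store = new HashMap<>();

    // Returns true if the update was applied, false if dropped as out-of-order.
    static boolean maybeUpdate(String key, long value, long timestamp) {
        long[] current = store.get(key);
        if (current != null && timestamp < current[1]) {
            return false; // older than stored timestamp -> ignore, do not forward
        }
        store.put(key, new long[]{value, timestamp});
        return true;
    }

    public static void main(String[] args) {
        System.out.println(maybeUpdate("k", 1, 100)); // applied: first value for key
        System.out.println(maybeUpdate("k", 2, 90));  // dropped: out-of-order update
        System.out.println(maybeUpdate("k", 3, 110)); // applied: newer timestamp
    }
}
```

The bug reported here is that the comparison used by `KTableSource` (via `ValueAndTimestampSerializer`) effectively inverts this check, so the out-of-order update overwrites the store and is forwarded downstream.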