mjsax commented on code in PR #13364:
URL: https://github.com/apache/kafka/pull/13364#discussion_r1151296751


##########
streams/src/main/java/org/apache/kafka/streams/state/internals/RocksDBVersionedStoreSegmentValueFormatter.java:
##########
@@ -341,8 +345,10 @@ public void insertAsLatest(final long validFrom, final long validTo, final byte[
                 // detected inconsistency edge case where older segment has [a,b) while newer store
                 // has [a,c), due to [b,c) having failed to write to newer store.
                 // remove entries from this store until the overlap is resolved.

Review Comment:
   > The recovery code is more general, though, in that it also supports truncating multiple records and/or partial records, in case my above understanding about the types of failures which can occur is not accurate or changes in the future.
   
   Yes, that is what I figured -- I'm wondering whether it would be safe to make it more generic or not. The order in which we process records is not 100% deterministic (we could have something like `s1.merge(s2).toTable()`, for example, and read from two partitions).
   
   Or in the "worst" case, somebody might write a non-deterministic custom `Processor`...
   
   Just want to double-check your thoughts on this.
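   To make the non-determinism concern concrete, here is a small stdlib-only sketch (hypothetical, not Kafka Streams code): two source partitions each deliver a record for the same key, and whichever record happens to be processed last "wins" in the materialized table, just as with `toTable()`. The class and method names are illustrative only.
   
   ```java
   import java.util.HashMap;
   import java.util.List;
   import java.util.Map;
   
   // Illustration only: simulates how the poll order of two source partitions
   // (e.g. the inputs of s1.merge(s2).toTable()) determines which record ends
   // up as the "latest" value per key in the materialized table.
   public class MergeOrderDemo {
       // Apply records (key, value) in the given order; the table keeps the
       // last value seen per key, analogous to materializing with toTable().
       static Map<String, String> materialize(final List<String[]> records) {
           final Map<String, String> table = new HashMap<>();
           for (final String[] kv : records) {
               table.put(kv[0], kv[1]);
           }
           return table;
       }
   
       public static void main(final String[] args) {
           // The same two records, one per source partition, for key "k".
           final String[] fromS1 = {"k", "v1"};
           final String[] fromS2 = {"k", "v2"};
   
           // Poll order 1: s1's partition is read first -> "v2" wins.
           final Map<String, String> t1 = materialize(List.of(fromS1, fromS2));
           // Poll order 2: s2's partition is read first -> "v1" wins.
           final Map<String, String> t2 = materialize(List.of(fromS2, fromS1));
   
           System.out.println(t1.get("k")); // v2
           System.out.println(t2.get("k")); // v1
       }
   }
   ```
   
   Since both poll orders are valid at runtime, recovery code that only assumes a single, fixed interleaving could be caught out; this is why the question of how generic the truncation logic needs to be matters.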



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: jira-unsubscr...@kafka.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org
