nsivabalan opened a new pull request, #7490: URL: https://github.com/apache/hudi/pull/7490
### Change Logs

The metadata table (MDT) could deem invalid data as valid under some rare conditions. In particular, when there are partially failed commits in the MDT and the failed commit refers to a compaction or clustering action in the data table, we might see anomalies. Scenario where this could fail with inline compaction:

Data table timeline:

    t1.dc  t2.comp.req  |crash|  t3.dc  t2.comp.inflight  t2.commit

MDT timeline:

    t1.dc  t2.comp.inflight  |crash|  t3.dc  t4.rb(t2)  t2.dc

The first attempt of t2 in the MDT should be rolled back since it crashed mid-way; in other words, any log blocks written by t2 in the MDT should be deemed invalid. But here is how the log blocks are laid out:

    log1(t1)  log2(t2, first attempt)  |crash|  log3(t3)  log4(t4.rb rolling back t2)  log5(t2)

When we read the log blocks via AbstractLogRecordReader, we ideally want to ignore log2. But when we encounter the rollback block log4, we only check the immediately preceding log block for a matching commit to roll back. Since log3 does not match t2, we assume log4 is a duplicate rollback, and so log2 is still deemed a valid log block. As a result, the MDT could serve data files that are not valid from a file-system-listing standpoint.

Fix: switching the failed-writes cleaning policy in the MDT to EAGER solves this issue.

### Impact

Stabilizes the metadata table.

### Risk level (write none, low medium or high below)

Low.

### Documentation Update

N/A

### Contributor's checklist

- [ ] Read through [contributor's guide](https://hudi.apache.org/contribute/how-to-contribute)
- [ ] Change Logs and Impact were stated clearly
- [ ] Adequate tests were added if applicable
- [ ] CI passed
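The misdetection described above can be sketched with a toy scan. This is a minimal illustration, not Hudi's actual `AbstractLogRecordReader`; the `Block` model and `naiveScan` helper are hypothetical. A rollback block only inspects the block directly before it, so the rollback at log4 sees log3 (t3), concludes it is a duplicate, and leaves the failed t2 attempt in place:

```java
import java.util.ArrayList;
import java.util.List;

public class RollbackScanSketch {
  // Hypothetical, simplified log block model (not Hudi's real classes).
  record Block(String type, String instant) {}

  // Naive scan mirroring the bug: a ROLLBACK block only checks the
  // immediately preceding surviving block for a matching instant.
  static List<Block> naiveScan(List<Block> blocks) {
    List<Block> valid = new ArrayList<>();
    for (Block b : blocks) {
      if (b.type().equals("ROLLBACK")) {
        if (!valid.isEmpty() && valid.get(valid.size() - 1).instant().equals(b.instant())) {
          valid.remove(valid.size() - 1);
        }
        // Otherwise the rollback is treated as a duplicate: nothing removed.
      } else {
        valid.add(b);
      }
    }
    return valid;
  }

  public static void main(String[] args) {
    List<Block> logs = List.of(
        new Block("DATA", "t1"),
        new Block("DATA", "t2"),      // log2: failed first attempt of t2
        new Block("DATA", "t3"),      // log3
        new Block("ROLLBACK", "t2"),  // log4: t4.rb targeting t2
        new Block("DATA", "t2"));     // log5: re-attempt of t2
    List<Block> valid = naiveScan(logs);
    long t2Blocks = valid.stream().filter(b -> b.instant().equals("t2")).count();
    // The failed first attempt of t2 survives: the rollback only looked at t3.
    System.out.println("valid blocks: " + valid.size() + ", t2 blocks: " + t2Blocks);
    // -> valid blocks: 4, t2 blocks: 2 (should be 3 and 1)
  }
}
```

Both t2 attempts survive the scan, which is exactly how the MDT ends up serving files that a direct file-system listing would not report.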

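To see why the EAGER fix works, note that with eager failed-writes cleaning the failed write is rolled back before any subsequent writer proceeds, so the rollback block lands immediately after the failed block in the log. Under that ordering, even the previous-block-only check resolves correctly. The sketch below is hypothetical and reuses the same simplified model as above, not Hudi's implementation:

```java
import java.util.ArrayList;
import java.util.List;

public class EagerCleaningSketch {
  // Hypothetical, simplified log block model (not Hudi's real classes).
  record Block(String type, String instant) {}

  // Same previous-block-only rollback check as described in the PR.
  static List<Block> scan(List<Block> blocks) {
    List<Block> valid = new ArrayList<>();
    for (Block b : blocks) {
      if (b.type().equals("ROLLBACK")) {
        if (!valid.isEmpty() && valid.get(valid.size() - 1).instant().equals(b.instant())) {
          valid.remove(valid.size() - 1);
        }
      } else {
        valid.add(b);
      }
    }
    return valid;
  }

  public static void main(String[] args) {
    // With EAGER cleaning, t2's failed attempt is rolled back before t3
    // writes, so the rollback block is adjacent to the block it targets.
    List<Block> logs = List.of(
        new Block("DATA", "t1"),
        new Block("DATA", "t2"),      // failed first attempt of t2
        new Block("ROLLBACK", "t2"),  // eager rollback, adjacent to t2
        new Block("DATA", "t3"),
        new Block("DATA", "t2"));     // re-attempt of t2
    List<Block> valid = scan(logs);
    long t2Blocks = valid.stream().filter(b -> b.instant().equals("t2")).count();
    System.out.println("valid blocks: " + valid.size() + ", t2 blocks: " + t2Blocks);
    // -> valid blocks: 3, t2 blocks: 1 : the failed attempt is dropped
  }
}
```

The key design point is ordering, not a smarter reader: eager cleaning guarantees the rollback block is written before any later instant, restoring the adjacency that the previous-block check depends on.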