danny0405 commented on code in PR #9879:
URL: https://github.com/apache/hudi/pull/9879#discussion_r1363798774
##########
hudi-common/src/main/java/org/apache/hudi/common/table/log/AbstractHoodieLogRecordReader.java:
##########
@@ -241,7 +241,12 @@ private void scanInternalV1(Option<KeySpec> keySpecOpt) {
try {
// Iterate over the paths
logFormatReaderWrapper = new HoodieLogFormatReader(fs,
-        logFilePaths.stream().map(logFile -> new HoodieLogFile(new CachingPath(logFile))).collect(Collectors.toList()),
+        logFilePaths.stream()
+            .map(filePath -> new HoodieLogFile(new CachingPath(filePath)))
+            // hit an uncommitted file possibly from a failed write, skip processing this one
+            .filter(logFile -> completedInstantsTimeline.containsOrBeforeTimelineStarts(logFile.getDeltaCommitTime())
Review Comment:
Fix it by checking both the file name and the block instant. There might be some minor performance regression from the double check, but do we have a better solution?
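
A minimal sketch of the double check suggested above, using simplified hypothetical types (a plain `Set` of completed instant times and a `shouldProcess` helper stand in for Hudi's actual `HoodieTimeline` and log-reader APIs, which are assumptions here, not the real implementation):

```java
import java.util.HashSet;
import java.util.Set;

// Hypothetical helper illustrating the reviewer's "double check":
// validate the instant encoded in the log file name AND the instant
// stored in each log block header against the completed timeline.
public class UncommittedFileFilter {
    private final Set<String> completedInstants;

    public UncommittedFileFilter(Set<String> completedInstants) {
        this.completedInstants = new HashSet<>(completedInstants);
    }

    // First check: the delta-commit time parsed from the file name must
    // belong to a completed instant; otherwise the file may come from a
    // failed write and is skipped entirely.
    public boolean isFileCommitted(String fileInstantTime) {
        return completedInstants.contains(fileInstantTime);
    }

    // Second check: even within a committed file, each block's instant
    // header must also be committed, guarding against blocks appended
    // by a later failed write to the same file.
    public boolean isBlockCommitted(String blockInstantTime) {
        return completedInstants.contains(blockInstantTime);
    }

    // A block is processed only when both checks pass; this is the
    // source of the "minor performance regression" the comment mentions.
    public boolean shouldProcess(String fileInstantTime, String blockInstantTime) {
        return isFileCommitted(fileInstantTime) && isBlockCommitted(blockInstantTime);
    }
}
```

In the actual PR, the first check corresponds to the `.filter(...)` on `completedInstantsTimeline` shown in the diff; the block-level check would happen while iterating block headers during the scan.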