yihua commented on code in PR #10913:
URL: https://github.com/apache/hudi/pull/10913#discussion_r1539980247
##########
hudi-common/src/main/java/org/apache/hudi/metadata/HoodieMetadataPayload.java:
##########
@@ -550,27 +552,34 @@ private Map<String, HoodieMetadataFileInfo> combineFileSystemMetadata(HoodieMeta
        //     - First we merge records from all of the delta log-files
        //     - Then we merge records from base-files with the delta ones (coming as a result
        //       of the previous step)
-       (oldFileInfo, newFileInfo) ->
-           // NOTE: We can’t assume that MT update records will be ordered the same way as actual
-           //       FS operations (since they are not atomic), therefore MT record merging should be a
-           //       _commutative_ & _associative_ operation (ie one that would work even in case records
-           //       will get re-ordered), which is
-           //         - Possible for file-sizes (since file-sizes will ever grow, we can simply
-           //           take max of the old and new records)
-           //         - Not possible for is-deleted flags*
-           //
-           //       *However, we’re assuming that the case of concurrent write and deletion of the same
-           //        file is _impossible_ -- it would only be possible with concurrent upsert and
-           //        rollback operation (affecting the same log-file), which is implausible, b/c either
-           //        of the following have to be true:
-           //          - We’re appending to a failed log-file (then the other writer is trying to
-           //            roll it back concurrently, before its own write)
-           //          - Rollback (of a completed instant) is running concurrently with append (meaning
-           //            that restore is running concurrently with a write, which is also not supported
-           //            currently)
-           newFileInfo.getIsDeleted()
-               ? null
-               : new HoodieMetadataFileInfo(Math.max(newFileInfo.getSize(), oldFileInfo.getSize()), false));
+       (oldFileInfo, newFileInfo) -> {
+         // NOTE: We can’t assume that MT update records will be ordered the same way as actual
+         //       FS operations (since they are not atomic), therefore MT record merging should be a
+         //       _commutative_ & _associative_ operation (ie one that would work even in case records
+         //       will get re-ordered), which is
+         //         - Possible for file-sizes (since file-sizes will ever grow, we can simply
+         //           take max of the old and new records)
+         //         - Not possible for is-deleted flags*
+         //
+         //       *However, we’re assuming that the case of concurrent write and deletion of the same
+         //        file is _impossible_ -- it would only be possible with concurrent upsert and
+         //        rollback operation (affecting the same log-file), which is implausible, b/c either
+         //        of the following have to be true:
+         //          - We’re appending to a failed log-file (then the other writer is trying to
+         //            roll it back concurrently, before its own write)
+         //          - Rollback (of a completed instant) is running concurrently with append (meaning
+         //            that restore is running concurrently with a write, which is also not supported
+         //            currently)
+         if (newFileInfo.getIsDeleted()) {
+           if (oldFileInfo.getIsDeleted()) {
+             LOG.warn("A file is repeatedly deleted in the files partition of the metadata table: " + key);
Review Comment:
`newFileInfo` and `oldFileInfo` here each describe a single file only, identified by `key`, which is the file name.
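For intuition, the merging contract described in the NOTE above can be modeled standalone. The sketch below is hypothetical (a `FileInfo` record standing in for `HoodieMetadataFileInfo`, and `combine` for the merge lambda, neither taken from the actual codebase): sizes merge via `Math.max`, a delete tombstone maps to `null` (dropping the entry), and a delete merged over a delete is the repeated-deletion case the new warning flags.

```java
public class MetadataMergeSketch {
  // Hypothetical stand-in for HoodieMetadataFileInfo: just a size and an is-deleted flag.
  record FileInfo(long size, boolean isDeleted) {}

  // Models the merge lambda: a delete tombstone yields null (the entry is dropped);
  // otherwise keep the max size, since file sizes only ever grow.
  static FileInfo combine(FileInfo oldInfo, FileInfo newInfo) {
    if (newInfo.isDeleted()) {
      if (oldInfo.isDeleted()) {
        // Repeated deletion of the same file -- the case the added warning covers.
        System.err.println("File deleted more than once");
      }
      return null;
    }
    return new FileInfo(Math.max(oldInfo.size(), newInfo.size()), false);
  }

  public static void main(String[] args) {
    FileInfo a = new FileInfo(100, false);
    FileInfo b = new FileInfo(250, false);
    // Size merging is commutative: either order yields size 250.
    System.out.println(combine(a, b).equals(combine(b, a))); // prints true
    // A tombstone drops the entry regardless of ordering of sizes:
    System.out.println(combine(a, new FileInfo(0, true))); // prints null
  }
}
```

Note that the size merge is commutative and associative, while the delete path is not, which is exactly the asymmetry the NOTE calls out.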
##########
hudi-client/hudi-spark-client/src/test/java/org/apache/hudi/client/functional/TestHoodieBackedTableMetadata.java:
##########
@@ -285,6 +297,112 @@ public void testMetadataRecordKeyExcludeFromPayload(final HoodieTableType tableT
     validateMetadata(testTable);
   }

+  /**
+   * This tests the case where two clean actions delete the same file and commit
+   * to the metadata table. The metadata table should not contain the deleted file afterwards.
+   * A new cleaner plan may contain the same file to delete if the previous cleaner
+   * plan has not been successfully executed before the new one is scheduled.
+   */
+  @ParameterizedTest
+  @EnumSource(HoodieTableType.class)
Review Comment:
Since we're improving the MDT in the 1.x releases, these tests serve as extra guards around correctness.
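The scenario the test javadoc describes can be sketched in miniature. This is a hypothetical model, not the actual test harness or Hudi API: `apply` replays one batch of metadata records (e.g. one clean action's commit) into a merged file listing, and two delete records for the same key must leave the listing without that file, with the second delete finding nothing to remove.

```java
import java.util.HashMap;
import java.util.Map;

public class DuplicateDeleteSketch {
  // Hypothetical stand-in for a metadata-table file record.
  record FileInfo(long size, boolean isDeleted) {}

  // Replays one batch of metadata records into the merged file listing.
  static void apply(Map<String, FileInfo> files, Map<String, FileInfo> records) {
    records.forEach((key, newInfo) -> {
      if (newInfo.isDeleted()) {
        // A delete record removes the file; a duplicate delete from a re-scheduled
        // cleaner plan finds nothing to remove and is effectively a no-op.
        files.remove(key);
      } else {
        files.merge(key, newInfo,
            (oldInfo, n) -> new FileInfo(Math.max(oldInfo.size(), n.size()), false));
      }
    });
  }

  public static void main(String[] args) {
    Map<String, FileInfo> files = new HashMap<>();
    apply(files, Map.of("f1.parquet", new FileInfo(100, false)));
    // Two clean actions both commit a delete record for the same file:
    apply(files, Map.of("f1.parquet", new FileInfo(0, true)));
    apply(files, Map.of("f1.parquet", new FileInfo(0, true)));
    System.out.println(files.containsKey("f1.parquet")); // prints false
  }
}
```

The design point the test guards: replaying the duplicate delete must not resurrect the file or fail the commit, only (at most) log a warning.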
--
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
To unsubscribe, e-mail: [email protected]
For queries about this service, please contact Infrastructure at:
[email protected]