nsivabalan commented on a change in pull request #4739:
URL: https://github.com/apache/hudi/pull/4739#discussion_r802253573
##########
File path: hudi-common/src/main/java/org/apache/hudi/metadata/HoodieMetadataPayload.java
##########
@@ -362,28 +361,33 @@ private HoodieMetadataColumnStats combineColumnStatsMetadatat(HoodieMetadataPayl
     return filesystemMetadata.entrySet().stream().filter(e -> e.getValue().getIsDeleted() == isDeleted);
   }

-  private Map<String, HoodieMetadataFileInfo> combineFilesystemMetadata(HoodieMetadataPayload previousRecord) {
+  private Map<String, HoodieMetadataFileInfo> combineFileSystemMetadata(HoodieMetadataPayload previousRecord) {
     Map<String, HoodieMetadataFileInfo> combinedFileInfo = new HashMap<>();
+
+    // First, add all files listed in the previous record
     if (previousRecord.filesystemMetadata != null) {
       combinedFileInfo.putAll(previousRecord.filesystemMetadata);
     }
+    // Second, merge in the files listed in the new record
     if (filesystemMetadata != null) {
-      filesystemMetadata.forEach((filename, fileInfo) -> {
-        // If the filename wasnt present then we carry it forward
-        if (!combinedFileInfo.containsKey(filename)) {
-          combinedFileInfo.put(filename, fileInfo);
-        } else {
-          if (fileInfo.getIsDeleted()) {
-            // file deletion
-            combinedFileInfo.remove(filename);
-          } else {
-            // file appends.
-            combinedFileInfo.merge(filename, fileInfo, (oldFileInfo, newFileInfo) -> {
-              return new HoodieMetadataFileInfo(oldFileInfo.getSize() + newFileInfo.getSize(), false);
-            });
-          }
-        }
+      validatePayload(type, filesystemMetadata);
+
+      filesystemMetadata.forEach((key, fileInfo) -> {
+        combinedFileInfo.merge(key, fileInfo,
+            // Combine previous record w/ the new one, new records taking precedence over
+            // the old one
+            //
+            // NOTE: That if previous listing contains the file that is being deleted by the tombstone
+            //       record (`IsDeleted` = true) in the new one, we simply delete the file from the resulting
+            //       listing as well as drop the tombstone itself.
+            //       However, if file is not present in the previous record we have to persist tombstone
+            //       record in the listing to make sure we carry forward information that this file
+            //       was deleted. This special case could occur since the merging flow is 2-stage:
+            //          - First we merge records from all of the delta log-files
+            //          - Then we merge records from base-files with the delta ones (coming as a result
+            //            of the previous step)
+            (oldFileInfo, newFileInfo) -> newFileInfo.getIsDeleted() ? null : newFileInfo);
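(Editor's aside: the new code above leans on a subtlety of java.util.Map#merge: returning null from the remapping function removes the mapping, while merging against an absent key simply inserts the new value. A minimal standalone sketch of those semantics follows; the FileInfo record is a hypothetical stand-in for HoodieMetadataFileInfo, keeping only the two fields the diff uses.)

import java.util.HashMap;
import java.util.Map;

public class MergeSemanticsSketch {
  // Hypothetical stand-in for HoodieMetadataFileInfo (size + deletion flag only).
  record FileInfo(long size, boolean isDeleted) {}

  public static void main(String[] args) {
    Map<String, FileInfo> combined = new HashMap<>();
    combined.put("f1.parquet", new FileInfo(100L, false));

    // Tombstone for a file present in the previous listing: returning null
    // from the remapping function makes Map.merge drop the entry entirely,
    // i.e. both the file and the tombstone disappear.
    combined.merge("f1.parquet", new FileInfo(0L, true),
        (oldInfo, newInfo) -> newInfo.isDeleted() ? null : newInfo);
    System.out.println(combined.containsKey("f1.parquet")); // false

    // Tombstone for a file absent from the previous listing: merge finds no
    // existing mapping, so the tombstone itself is carried forward -- the
    // 2-stage-merge special case the NOTE above describes.
    combined.merge("f2.parquet", new FileInfo(0L, true),
        (oldInfo, newInfo) -> newInfo.isDeleted() ? null : newInfo);
    System.out.println(combined.get("f2.parquet")); // FileInfo[size=0, isDeleted=true]
  }
}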
Review comment:
I agree, but the scenarios you quoted are unlikely to happen in general:
a. The cleaner will not touch the latest file slice, which is where regular writes go (MOR table).
b. In the case of a COW rollback there is no log-file concept, so a rollback and a commit touching the same file is very unlikely.
So for two concurrent writes where one deletes a file while the other updates it or adds a new one, I can't think of a valid scenario. Maybe there is one, but we would need to think hard to come up with it.
However, the scenario of a rollback running concurrently with updates to the same file slice can happen in a MOR table.
Also, the fix being considered, taking the max file size rather than the latest file size, should not cause any regression and could help thwart some of these scenarios. Do you see any issue specifically with taking the max of the file sizes in the merge function (sketched below)? At least for the rollback-plus-regular-writer MOR use-case, we have to make the fix; I don't think we can ignore that use-case.
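For concreteness, a rough sketch of what "max of file sizes in the merge function" could look like. This is hypothetical illustration, not the actual patch: FileInfo and mergeListings are illustrative stand-ins for HoodieMetadataFileInfo and combineFileSystemMetadata.

import java.util.HashMap;
import java.util.Map;

public class MaxSizeMergeSketch {
  // Hypothetical stand-in mirroring the two fields the diff uses.
  record FileInfo(long size, boolean isDeleted) {}

  static Map<String, FileInfo> mergeListings(Map<String, FileInfo> previous,
                                             Map<String, FileInfo> incoming) {
    Map<String, FileInfo> combined = new HashMap<>(previous);
    incoming.forEach((filename, fileInfo) ->
        combined.merge(filename, fileInfo,
            // Instead of letting the newer record win outright, keep the max
            // of the two sizes so an out-of-order update (e.g. rollback vs.
            // regular writer) cannot shrink a file's recorded size.
            (oldInfo, newInfo) -> newInfo.isDeleted()
                ? null
                : new FileInfo(Math.max(oldInfo.size(), newInfo.size()), false)));
    return combined;
  }

  public static void main(String[] args) {
    Map<String, FileInfo> prev = Map.of("l1.log", new FileInfo(500L, false));
    Map<String, FileInfo> next = Map.of("l1.log", new FileInfo(300L, false));
    // The larger size wins: FileInfo[size=500, isDeleted=false]
    System.out.println(mergeListings(prev, next).get("l1.log"));
  }
}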