vinothchandar commented on a change in pull request #3210:
URL: https://github.com/apache/hudi/pull/3210#discussion_r665906425
##########
File path:
hudi-common/src/main/java/org/apache/hudi/metadata/HoodieMetadataFileSystemView.java
##########
@@ -73,4 +73,16 @@ public void close() {
throw new HoodieException("Error closing metadata file system view.", e);
}
}
+
+ @Override
+ public void sync() {
+ // Sync the tableMetadata first as super.sync() may call listPartition
+ if (tableMetadata != null) {
+ BaseTableMetadata baseMetadata = (BaseTableMetadata)tableMetadata;
Review comment:
nit: space after `)`. Surprised that checkstyle is happy.
##########
File path:
hudi-common/src/main/java/org/apache/hudi/metadata/BaseTableMetadata.java
##########
@@ -322,8 +303,22 @@ protected HoodieEngineContext getEngineContext() {
return engineContext != null ? engineContext : new HoodieLocalEngineContext(hadoopConf.get());
}
+ public HoodieMetadataConfig getMetadataConfig() {
+ return metadataConfig;
+ }
+
protected String getLatestDatasetInstantTime() {
return datasetMetaClient.getActiveTimeline().filterCompletedInstants().lastInstant()
    .map(HoodieInstant::getTimestamp).orElse(SOLO_COMMIT_TIMESTAMP);
}
+
+ @Override
+ public Option<String> getSyncedInstantTimeForReader() {
+ if (timelineMergedMetadata == null) {
Review comment:
Just `return Option.ofNullable(timelineMergedMetadata).map(TimelineMergedTableMetadata::getSyncedInstantTime)`? Meta point: we can just use the Option APIs like this without explicit null checks.
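The pattern the comment suggests can be sketched with `java.util.Optional` as a stand-in for Hudi's `Option` (both expose the same `ofNullable`/`map` shape); the nested class below is a hypothetical mock, not the real `TimelineMergedTableMetadata`:

```java
import java.util.Optional;

public class SyncedInstantExample {

    // Minimal stand-in for TimelineMergedTableMetadata; only the getter matters here.
    public static class TimelineMergedTableMetadata {
        private final String syncedInstantTime;

        public TimelineMergedTableMetadata(String syncedInstantTime) {
            this.syncedInstantTime = syncedInstantTime;
        }

        public String getSyncedInstantTime() {
            return syncedInstantTime;
        }
    }

    // The suggested one-liner: ofNullable absorbs the null check, and map
    // only invokes the getter when a value is present.
    public static Optional<String> getSyncedInstantTimeForReader(TimelineMergedTableMetadata m) {
        return Optional.ofNullable(m).map(TimelineMergedTableMetadata::getSyncedInstantTime);
    }

    public static void main(String[] args) {
        System.out.println(getSyncedInstantTimeForReader(null).isPresent()); // false
        System.out.println(getSyncedInstantTimeForReader(
                new TimelineMergedTableMetadata("20210707120000")).orElse("none")); // 20210707120000
    }
}
```

The explicit `if (timelineMergedMetadata == null)` branch and the `map` chain are equivalent; the chain just pushes the null handling into the Option API.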
##########
File path:
hudi-client/hudi-client-common/src/main/java/org/apache/hudi/metadata/HoodieBackedTableMetadataWriter.java
##########
@@ -137,6 +138,14 @@ protected HoodieBackedTableMetadataWriter(Configuration hadoopConf, HoodieWriteC
private HoodieWriteConfig createMetadataWriteConfig(HoodieWriteConfig writeConfig) {
int parallelism = writeConfig.getMetadataInsertParallelism();
+ // The metadata table timeline should always encompass the dataset timeline so that syncing of manual rollbacks can
+ // always check whether the rolled-back-instant was ever synced to the metadata table. Since all actions on the
+ // dataset always lead to a delta-commit on the Metadata Table, we need to preserve enough deltacommits that can
+ // cover total number of all the non-archived actions on the dataset.
+ int multiplier = HoodieTimeline.VALID_ACTIONS_IN_TIMELINE.length;
Review comment:
why is this multiplier chosen? It's not guaranteed that the timeline will have all valid actions at once, right?
##########
File path:
hudi-common/src/main/java/org/apache/hudi/metadata/HoodieMetadataFileSystemView.java
##########
@@ -73,4 +73,16 @@ public void close() {
throw new HoodieException("Error closing metadata file system view.", e);
}
}
+
+ @Override
+ public void sync() {
Review comment:
IIUC this is to handle the file system view refreshes
##########
File path:
hudi-client/hudi-client-common/src/main/java/org/apache/hudi/metadata/HoodieBackedTableMetadataWriter.java
##########
@@ -137,6 +138,14 @@ protected HoodieBackedTableMetadataWriter(Configuration hadoopConf, HoodieWriteC
private HoodieWriteConfig createMetadataWriteConfig(HoodieWriteConfig writeConfig) {
+ // The metadata table timeline should always encompass the dataset timeline so that syncing of manual rollbacks can
+ // always check whether the rolled-back-instant was ever synced to the metadata table. Since all actions on the
+ // dataset always lead to a delta-commit on the Metadata Table, we need to preserve enough deltacommits that can
+ // cover total number of all the non-archived actions on the dataset.
+ int multiplier = HoodieTimeline.VALID_ACTIONS_IN_TIMELINE.length;
Review comment:
I wonder if, say, 8x-ing the retention of the metadata timeline is really necessary.
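For concreteness, a hedged sketch of the arithmetic being questioned; the numbers below are illustrative stand-ins, not Hudi's defaults, and the real values would come from `HoodieWriteConfig` and `HoodieTimeline.VALID_ACTIONS_IN_TIMELINE`:

```java
public class RetentionSketch {

    // Suppose the dataset keeps at most this many non-archived commits (made-up value)...
    public static final int DATASET_MAX_COMMITS_TO_KEEP = 30;

    // ...and the multiplier is the count of valid action types on the timeline
    // (commit, deltacommit, clean, rollback, etc.; 8 is illustrative).
    public static final int VALID_ACTION_TYPES = 8;

    public static int metadataMaxCommitsToKeep(int datasetMaxCommits, int multiplier) {
        // One deltacommit lands on the metadata table per dataset action, so the
        // retention is scaled to cover every non-archived action in the worst case.
        return datasetMaxCommits * multiplier;
    }

    public static void main(String[] args) {
        // 8x the dataset retention: 30 * 8 = 240 deltacommits retained.
        System.out.println(metadataMaxCommitsToKeep(DATASET_MAX_COMMITS_TO_KEEP, VALID_ACTION_TYPES));
    }
}
```

The reviewer's point is that this is the worst case: a real timeline rarely contains every action type at once, so a smaller multiplier might suffice.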
##########
File path:
hudi-common/src/main/java/org/apache/hudi/metadata/HoodieTableMetadataUtil.java
##########
@@ -236,30 +239,50 @@
*
* During a rollback, files may be deleted (COW, MOR) or rollback blocks appended (MOR only) to files. This
* function will extract this change file for each partition.
- *
+ * @param metadataTableTimeline Current timeline of the Metadata Table
* @param rollbackMetadata {@code HoodieRollbackMetadata}
* @param partitionToDeletedFiles The {@code Map} to fill with files deleted per partition.
* @param partitionToAppendedFiles The {@code Map} to fill with files appended per partition and their sizes.
*/
- private static void processRollbackMetadata(HoodieRollbackMetadata rollbackMetadata,
+ private static void processRollbackMetadata(HoodieTimeline metadataTableTimeline, HoodieRollbackMetadata rollbackMetadata,
                                              Map<String, List<String>> partitionToDeletedFiles,
                                              Map<String, Map<String, Long>> partitionToAppendedFiles,
                                              Option<String> lastSyncTs) {
rollbackMetadata.getPartitionMetadata().values().forEach(pm -> {
+ final String instantToRollback = rollbackMetadata.getCommitsRollback().get(0);
// Has this rollback produced new files?
boolean hasRollbackLogFiles = pm.getRollbackLogFiles() != null && !pm.getRollbackLogFiles().isEmpty();
boolean hasNonZeroRollbackLogFiles = hasRollbackLogFiles && pm.getRollbackLogFiles().values().stream().mapToLong(Long::longValue).sum() > 0;
- // If commit being rolled back has not been synced to metadata table yet then there is no need to update metadata
+
+ // If instant-to-rollback has not been synced to metadata table yet then there is no need to update metadata
+ // This can happen in two cases:
Review comment:
Thinking back, we could avoid this kind of complexity. Could we just sync the `instant-to-rollback` and then let the rollback actually reverse it, instead?
##########
File path:
hudi-common/src/main/java/org/apache/hudi/metadata/HoodieBackedTableMetadata.java
##########
@@ -260,6 +265,20 @@ public synchronized void close() throws Exception {
logRecordScanner = null;
}
+ /**
+ * Return the timestamp of the latest synced instant.
+ */
+ @Override
+ public Option<String> getSyncedInstantTime() {
Review comment:
Seems like the same code as in the writer? Any chance for code reuse?
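One shape the reuse could take (a hypothetical sketch; the names, the static helper, and the list-based timeline stand-in are all assumptions, not Hudi's actual API) is hoisting the shared lookup into a utility both the reader and the writer call:

```java
import java.util.List;
import java.util.Optional;

// Hypothetical sketch: a single static helper for the "latest synced instant"
// lookup that both HoodieBackedTableMetadata (reader side) and the metadata
// writer could delegate to. The completed timeline is modeled here as an
// ordered list of instant timestamps; real code would use the timeline APIs.
public class SyncedInstantUtil {

    public static Optional<String> latestSyncedInstant(List<String> completedDeltaCommits) {
        // Instants are assumed sorted ascending, so the last entry is the latest.
        return completedDeltaCommits.isEmpty()
                ? Optional.empty()
                : Optional.of(completedDeltaCommits.get(completedDeltaCommits.size() - 1));
    }

    public static void main(String[] args) {
        System.out.println(latestSyncedInstant(List.of("001", "002", "003")).orElse("none")); // 003
        System.out.println(latestSyncedInstant(List.of()).orElse("none")); // none
    }
}
```

Both call sites then shrink to a one-line delegation, keeping the null/empty handling in one place.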
--
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
To unsubscribe, e-mail: [email protected]
For queries about this service, please contact Infrastructure at:
[email protected]