nsivabalan commented on a change in pull request #4605:
URL: https://github.com/apache/hudi/pull/4605#discussion_r789061424
##########
File path:
hudi-client/hudi-client-common/src/main/java/org/apache/hudi/client/AbstractHoodieWriteClient.java
##########
@@ -674,16 +675,21 @@ public HoodieRestoreMetadata restoreToInstant(final String instantTime) throws H
     Timer.Context timerContext = metrics.getRollbackCtx();
     try {
       HoodieTable<T, I, K, O> table = createTable(config, hadoopConf, config.isMetadataTableEnabled());
-      HoodieRestoreMetadata restoreMetadata = table.restore(context, restoreInstantTime, instantTime);
-      if (timerContext != null) {
-        final long durationInMs = metrics.getDurationInMs(timerContext.stop());
-        final long totalFilesDeleted = restoreMetadata.getHoodieRestoreMetadata().values().stream()
-            .flatMap(Collection::stream)
-            .mapToLong(HoodieRollbackMetadata::getTotalFilesDeleted)
-            .sum();
-        metrics.updateRollbackMetrics(durationInMs, totalFilesDeleted);
+      Option<HoodieRestorePlan> restorePlanOption = table.scheduleRestore(context, restoreInstantTime, instantTime);
Review comment:
Nope, we don't guarantee atomicity. In general, restore is a destructive
operation, and it's recommended to stop all writers before doing a restore on
any given Hudi table. Even queries might fail if they are reading files that
the restore cleans up, since restore eagerly deletes files belonging to the
latest commits.
Given this, can you restate your question? Concurrent restores are not
advisable and might leave the table in an inconsistent state. So, are you
asking about two subsequent restores?
--
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
To unsubscribe, e-mail: [email protected]
For queries about this service, please contact Infrastructure at:
[email protected]