hudi-agent commented on code in PR #18696:
URL: https://github.com/apache/hudi/pull/18696#discussion_r3198283346
##########
hudi-client/hudi-client-common/src/main/java/org/apache/hudi/client/BaseHoodieWriteClient.java:
##########
@@ -894,6 +862,63 @@ public void restoreToSavepoint(String savepointTime) {
SavepointHelpers.validateSavepointRestore(table, savepointTime);
}
+  /**
+   * Decides whether the metadata table (MDT) must be deleted before restoring the data table to
+   * {@code targetInstant}. Returns true when restoring would leave the MDT in an inconsistent
+   * state, specifically when any of the following holds:
+   * <ol>
+   *   <li>The target is at or before the MDT's penultimate completed compaction (when at least
+   *       two compactions exist). The restore would otherwise succeed for the data table but
+   *       {@code finishRestore} would fail to sync rollbacks into an MDT with no base file at or
+   *       before the target time.</li>
+   *   <li>The target is at or before the oldest completed compaction. We cannot restore to before
+   *       the oldest compaction because we don't have base files before that time.</li>
+   *   <li>The target is before the MDT timeline start (the relevant history was archived away).</li>
+   * </ol>
+   * Returns false when the MDT directory does not exist (nothing to delete or worry about).
+   * Wraps any IOException reading the MDT in a {@link HoodieException} so genuine permission /
+   * network failures surface to the caller instead of being silently swallowed.
+   */
+  protected boolean shouldDeleteMdtBeforeRestore(String targetInstant) {
+    String mdtBasePath = getMetadataTableBasePath(config.getBasePath());
+    try {
+      // Cheap existence check first to avoid constructing an MDT meta client when there is no MDT.
+      if (!storage.exists(new StoragePath(mdtBasePath))) {
+        return false;
+      }
+      HoodieTableMetaClient mdtMetaClient = HoodieTableMetaClient.builder()
+          .setConf(storageConf.newInstance())
+          .setBasePath(mdtBasePath).build();
+      List<HoodieInstant> completedCompactions = mdtMetaClient.getCommitTimeline()
+          .filterCompletedInstants().getInstants();
+      if (completedCompactions.size() >= 2) {
+        String penultimate = completedCompactions.get(completedCompactions.size() - 2).requestedTime();
+        if (LESSER_THAN_OR_EQUALS.test(targetInstant, penultimate)) {
+          log.warn("Deleting MDT before restore to {}: target is at or before penultimate MDT compaction {}",
+              targetInstant, penultimate);
+          return true;
+        }
+      }
+      Option<HoodieInstant> oldestMdtCompaction = mdtMetaClient.getCommitTimeline()
Review Comment:
   🤖 nit: `completedCompactions` already holds the full sorted list from the `filterCompletedInstants()` call on line 892. Could you replace lines 902–903 with `completedCompactions.isEmpty() ? Option.empty() : Option.of(completedCompactions.get(0))`? The second timeline traversal makes a reader wonder whether the two calls might return different results.
   <sub><i>- AI-generated; verify before applying. React 👍/👎 to flag quality.</i></sub>
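The suggested simplification can be sketched in isolation. This is a minimal stand-alone version, with `java.util.Optional` and plain `String` timestamps standing in for Hudi's `Option` and `HoodieInstant` (both stand-ins are assumptions for illustration; the real code uses Hudi's own types):

```java
import java.util.List;
import java.util.Optional;

public class OldestCompactionSketch {
  // Reuse the already-materialized, sorted list of completed compactions
  // instead of traversing the timeline a second time.
  static Optional<String> oldestCompaction(List<String> completedCompactions) {
    return completedCompactions.isEmpty()
        ? Optional.empty()
        : Optional.of(completedCompactions.get(0));
  }

  public static void main(String[] args) {
    System.out.println(oldestCompaction(List.of("0010", "0020", "0030")));
    System.out.println(oldestCompaction(List.of()));
  }
}
```

Since `getInstants()` returns the instants in timeline order, element 0 is the oldest completed compaction, which makes the single-list version behaviorally equivalent to the second timeline traversal.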
##########
hudi-spark-datasource/hudi-spark/src/main/scala/org/apache/spark/sql/hudi/command/procedures/RestoreToInstantProcedure.scala:
##########
@@ -0,0 +1,303 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements. See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License. You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.spark.sql.hudi.command.procedures
+
+import org.apache.hudi.HoodieCLIUtils
+import org.apache.hudi.avro.model.HoodieRestoreMetadata
+import org.apache.hudi.client.SparkRDDWriteClient
+import org.apache.hudi.common.config.HoodieMetadataConfig
+import org.apache.hudi.common.fs.ConsistencyGuardConfig
+import org.apache.hudi.common.table.HoodieTableMetaClient
+import org.apache.hudi.common.table.timeline.HoodieInstant
+import org.apache.hudi.config.HoodieWriteConfig
+import org.apache.hudi.exception.HoodieException
+import org.apache.hudi.hadoop.fs.HadoopFSUtils
+import org.apache.hudi.storage.StoragePath
+
+import org.apache.hadoop.fs.Path
+import org.apache.spark.internal.Logging
+import org.apache.spark.sql.Row
+import org.apache.spark.sql.hudi.command.procedures.RestoreToInstantProcedure._
+import org.apache.spark.sql.types.{DataTypes, Metadata, StructField, StructType}
+
+import java.util.function.Supplier
+
+import scala.collection.JavaConverters._
+
+/**
+ * Stored procedure to perform a full point-in-time table restore to a given instant.
+ *
+ * Unlike [[RollbackToSavepointProcedure]] (which requires a savepoint at the target instant),
+ * this procedure calls restoreToInstant() directly and works on any arbitrary instant on the
+ * active timeline.
+ *
+ * Parameters:
+ *  - table / path: identifies the Hudi table (one must be provided)
+ *  - instant_time: target commit to restore to (required when audit_only=false; must be
+ *    omitted when audit_only=true)
+ *  - restore_instant_time: the restore operation's own timeline timestamp (the
+ *    start_restore_time value returned by a prior restore_to_instant call). Required when
+ *    audit_only=true; must be omitted otherwise.
+ *  - enable_metadata: whether the metadata table is enabled (default: true)
+ *  - rollback_parallelism: Spark parallelism for rollback and audit operations (default: 4)
+ *  - enable_consistency_guard: enable consistency guard for file existence checks (default: false)
+ *  - audit_post_restore: after restoring, verify that all rolled-back files are absent (default: false)
+ *  - audit_only: skip the restore and only audit a previously completed restore instant (default: false)
+ *
+ * Output columns:
+ *  - restore_result: true if restore succeeded; null if audit_only=true
+ *  - start_restore_time: the restore operation's own timeline timestamp; null if audit_only=true.
+ *    Save this value to use as restore_instant_time for a subsequent audit_only call.
+ *  - time_taken_in_millis: restore duration; null if audit_only=true
+ *  - instants_rolled_back: number of commits rolled back; null if audit_only=true
+ *  - audit_result: one of "PASSED" / "FAILED" / "INCONCLUSIVE" when an audit ran; null otherwise.
+ *    INCONCLUSIVE means at least one file existence check threw an IOException
+ *    (e.g. transient cloud-storage timeout); re-run with audit_only=true to retry.
+ */
+class RestoreToInstantProcedure extends BaseProcedure with ProcedureBuilder with Logging {
+
+  private val PARAMETERS = Array[ProcedureParameter](
+    ProcedureParameter.optional(0, "table", DataTypes.StringType),
+    ProcedureParameter.optional(1, "instant_time", DataTypes.StringType),
+    ProcedureParameter.optional(2, "enable_metadata", DataTypes.BooleanType, true),
+    ProcedureParameter.optional(3, "rollback_parallelism", DataTypes.IntegerType, 4),
+    ProcedureParameter.optional(4, "enable_consistency_guard", DataTypes.BooleanType, false),
+    ProcedureParameter.optional(5, "audit_post_restore", DataTypes.BooleanType, false),
+    ProcedureParameter.optional(6, "audit_only", DataTypes.BooleanType, false),
+    ProcedureParameter.optional(7, "path", DataTypes.StringType),
+    ProcedureParameter.optional(8, "restore_instant_time", DataTypes.StringType)
+  )
+
+  private val OUTPUT_TYPE = new StructType(Array[StructField](
+    StructField("restore_result", DataTypes.BooleanType, nullable = true, Metadata.empty),
+    StructField("start_restore_time", DataTypes.StringType, nullable = true, Metadata.empty),
+    StructField("time_taken_in_millis", DataTypes.LongType, nullable = true, Metadata.empty),
+    StructField("instants_rolled_back", DataTypes.LongType, nullable = true, Metadata.empty),
+    StructField("audit_result", DataTypes.StringType, nullable = true, Metadata.empty)
+  ))
+
+ def parameters: Array[ProcedureParameter] = PARAMETERS
+
+ def outputType: StructType = OUTPUT_TYPE
+
+  override def call(args: ProcedureArgs): Seq[Row] = {
+    super.checkArgs(PARAMETERS, args)
+
+    val tableName = getArgValueOrDefault(args, PARAMETERS(0))
+    val instantTime = getArgValueOrDefault(args, PARAMETERS(1))
+    val enableMetadata = getArgValueOrDefault(args, PARAMETERS(2)).get.asInstanceOf[Boolean]
+    val rollbackParallelism = getArgValueOrDefault(args, PARAMETERS(3)).get.asInstanceOf[Int]
+    val enableConsistencyGuard = getArgValueOrDefault(args, PARAMETERS(4)).get.asInstanceOf[Boolean]
+    val shouldAuditPostRestore = getArgValueOrDefault(args, PARAMETERS(5)).get.asInstanceOf[Boolean]
+    val auditOnly = getArgValueOrDefault(args, PARAMETERS(6)).get.asInstanceOf[Boolean]
+    val tablePath = getArgValueOrDefault(args, PARAMETERS(7))
+    val restoreInstantTime = getArgValueOrDefault(args, PARAMETERS(8))
+
+    // Cross-validation: each of (instant_time, restore_instant_time) has one unambiguous meaning.
+    if (!auditOnly && instantTime.isEmpty) {
+      throw new HoodieException("instant_time is required when audit_only=false.")
+    }
+    if (auditOnly && restoreInstantTime.isEmpty) {
+      throw new HoodieException(
+        "restore_instant_time is required when audit_only=true. " +
+          "Pass the start_restore_time value from a prior restore_to_instant call.")
+    }
+    if (!auditOnly && restoreInstantTime.isDefined) {
+      throw new HoodieException("restore_instant_time may only be specified when audit_only=true.")
+    }
+    if (auditOnly && instantTime.isDefined) {
+      throw new HoodieException(
+        "instant_time may only be specified when audit_only=false. " +
+          "Use restore_instant_time to identify a previously executed restore.")
+    }
+    if (auditOnly && shouldAuditPostRestore) {
+      logWarning("Both audit_only and audit_post_restore are set. Only audit_only will be honored.")
+    }
+
+    val basePath = getBasePath(tableName, tablePath)
+
+    val confs = Map(
+      HoodieMetadataConfig.ENABLE.key() -> enableMetadata.toString,
+      HoodieWriteConfig.ROLLBACK_PARALLELISM_VALUE.key() -> rollbackParallelism.toString,
+      HoodieWriteConfig.ROLLBACK_USING_MARKERS_ENABLE.key() -> "false"
+    ) ++ (if (enableConsistencyGuard) Map(ConsistencyGuardConfig.ENABLE.key() -> "true") else Map.empty)
+
+ val metaClient = createMetaClient(jsc, basePath)
+
+ // Nullable boxed types so Row can hold null for audit_only runs
+ var restoreResult: java.lang.Boolean = null
+ var startRestoreTime: String = null
+ var timeTakenInMillis: java.lang.Long = null
+ var instantsRolledBack: java.lang.Long = null
+
+    if (!auditOnly) {
+      val targetInstant = instantTime.get.asInstanceOf[String]
+      var client: SparkRDDWriteClient[_] = null
+      try {
+        client = HoodieCLIUtils.createHoodieWriteClient(sparkSession, basePath, confs,
+          tableName.asInstanceOf[Option[String]])
+        // restoreToInstant either returns non-null HoodieRestoreMetadata or throws HoodieRestoreException.
+        // Restoring to a target before the MDT's penultimate / oldest compaction (or before the MDT
+        // timeline start) would otherwise leave the MDT inconsistent during finishRestore;
+        // BaseHoodieWriteClient.restoreToInstant invokes the centralized helper to pre-emptively
+        // delete the MDT in those cases.
+        val restoreMetadata = client.restoreToInstant(targetInstant, enableMetadata)
+        restoreResult = true
+        startRestoreTime = restoreMetadata.getStartRestoreTime
+        timeTakenInMillis = restoreMetadata.getTimeTakenInMillis
+        instantsRolledBack = restoreMetadata.getInstantsToRollback.size().toLong
+      } finally {
+        if (client != null) {
+          client.close()
+        }
+      }
+      if (tableName.isDefined) {
+        spark.catalog.refreshTable(tableName.get.asInstanceOf[String])
+      }
+      // getActiveTimeline is lazily cached; reload so the new .restore instant is visible to the audit.
+      if (shouldAuditPostRestore) {
+        metaClient.reloadActiveTimeline()
+      }
+    }
+
+    var auditResult: String = null
+    if (auditOnly || shouldAuditPostRestore) {
+      val restoreInstant: HoodieInstant = if (auditOnly) {
+        val ts = restoreInstantTime.get.asInstanceOf[String]
+        val instants = metaClient.getActiveTimeline.getRestoreTimeline.filterCompletedInstants
+          .getInstants.asScala
+        instants.find(_.requestedTime().equals(ts)).getOrElse(
+          throw new HoodieException(s"No completed restore instant found for $ts. " +
+            "Pass the start_restore_time from a prior restore_to_instant call as restore_instant_time.")
+        )
+      } else {
+        val lastOpt = metaClient.getActiveTimeline.getRestoreTimeline.filterCompletedInstants.lastInstant
+        if (!lastOpt.isPresent) {
+          throw new HoodieException("No completed restore instant found on timeline after restore.")
+        }
+        lastOpt.get()
+      }
+      auditResult = auditPostRestore(metaClient, basePath, restoreInstant, rollbackParallelism)
+    }
+
+    Seq(Row(restoreResult, startRestoreTime, timeTakenInMillis, instantsRolledBack, auditResult))
+  }
+
+  /**
+   * Verifies that all files expected to have been deleted by a restore operation are actually
+   * absent from storage. Returns one of "PASSED", "FAILED", or "INCONCLUSIVE":
+   *  - PASSED: every file expected to be absent is in fact absent.
+   *  - FAILED: at least one file is still present after restore.
+   *  - INCONCLUSIVE: no file was confirmed present, but at least one existence check threw
+   *    an IOException (e.g. transient cloud-storage timeout). Re-run with audit_only=true to retry.
+   */
+  private def auditPostRestore(
+      metaClient: HoodieTableMetaClient,
+      basePath: String,
+      restoreInstant: HoodieInstant,
+      rollbackParallelism: Int): String = {
+    try {
+      val restoreMetadata: HoodieRestoreMetadata =
+        metaClient.getActiveTimeline.readRestoreMetadata(restoreInstant)
+
+      val filesToCheck = new java.util.ArrayList[String]()
+      restoreMetadata.getHoodieRestoreMetadata.values.asScala.foreach { rollbackList =>
Review Comment:
   🤖 Is including `getFailedDeleteFiles` in the audit intentional? These are files the rollback explicitly failed to delete (so they're typically still present), which means the audit will reliably report FAILED for them — duplicating information already in the rollback metadata. If the goal is to verify the rollback was effective, checking only `getSuccessDeleteFiles` (the files the rollback claims it deleted) seems closer to the audit's stated intent in the procedure's docstring ("verify that all rolled-back files are absent").
   <sub><i>- AI-generated; verify before applying. React 👍/👎 to flag quality.</i></sub>
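The selection the comment proposes can be sketched with plain collections. The per-partition maps below are hypothetical stand-ins for the rollback metadata's `getSuccessDeleteFiles` / `getFailedDeleteFiles` accessors (the stand-in shapes are assumptions; the real metadata is Avro-generated):

```java
import java.util.ArrayList;
import java.util.List;
import java.util.Map;

public class AuditFileSelectionSketch {
  // Collect only the files the rollback claims it deleted successfully; those are the
  // ones the audit can meaningfully verify as absent. Files in the failed-delete set
  // are already known to be present, so checking them only re-reports what the
  // rollback metadata already recorded.
  static List<String> filesToAudit(Map<String, List<String>> successDeleteFilesByPartition,
                                   Map<String, List<String>> failedDeleteFilesByPartition) {
    List<String> files = new ArrayList<>();
    successDeleteFilesByPartition.values().forEach(files::addAll);
    return files;
  }
}
```
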
##########
hudi-spark-datasource/hudi-spark/src/main/scala/org/apache/spark/sql/hudi/command/procedures/RestoreToInstantProcedure.scala:
##########
@@ -0,0 +1,303 @@
Review Comment:
   🤖 Could you use the `startRestoreTime` captured from `restoreMetadata.getStartRestoreTime` here instead of relying on `lastInstant`? You already have the exact restore timestamp in scope, and looking it up directly would be more deterministic — `lastInstant` would point at the wrong restore if another restore landed on the timeline between this restore committing and the timeline reload (rare in practice given Hudi's locking, but the explicit lookup matches what the `audit_only` branch already does and removes the assumption entirely).
   <sub><i>- AI-generated; verify before applying. React 👍/👎 to flag quality.</i></sub>
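The difference between the two lookups can be sketched in isolation. Plain `String` timestamps stand in for `HoodieInstant.requestedTime()` values (a stand-in assumption for illustration):

```java
import java.util.List;
import java.util.Optional;

public class RestoreInstantLookupSketch {
  // Deterministic lookup: find the completed restore instant whose requested time
  // matches the startRestoreTime captured from the restore we just executed, rather
  // than assuming the most recent restore on the timeline is ours.
  static Optional<String> findRestoreInstant(List<String> completedRestoreTimes,
                                             String startRestoreTime) {
    return completedRestoreTimes.stream()
        .filter(startRestoreTime::equals)
        .findFirst();
  }
}
```

With this lookup, a concurrent restore appended after ours simply doesn't match, whereas a last-element lookup would silently audit the wrong instant.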
##########
hudi-spark-datasource/hudi-spark/src/main/scala/org/apache/spark/sql/hudi/command/procedures/RestoreToInstantProcedure.scala:
##########
@@ -0,0 +1,303 @@
Review Comment:
   🤖 nit: the input parameter is `restore_instant_time` but the output column that feeds it is `start_restore_time` (line 88), so a caller has to read the Javadoc to know to pipe one into the other. Have you considered renaming the input to `start_restore_time` so the round-trip is self-evident: capture `start_restore_time` from step 1, pass it as `start_restore_time` in the audit call?
   <sub><i>- AI-generated; verify before applying. React 👍/👎 to flag quality.</i></sub>
--
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
To unsubscribe, e-mail: [email protected]
For queries about this service, please contact Infrastructure at:
[email protected]