yihua commented on code in PR #11947:
URL: https://github.com/apache/hudi/pull/11947#discussion_r1797905109


##########
hudi-utilities/src/main/java/org/apache/hudi/utilities/sources/GcsEventsHoodieIncrSource.java:
##########
@@ -170,7 +170,8 @@ public Pair<Option<Dataset<Row>>, String> fetchNextBatch(Option<String> lastChec
     QueryInfo queryInfo = generateQueryInfo(
         sparkContext, srcPath, numInstantsPerFetch,
         Option.of(cloudObjectIncrCheckpoint.getCommit()),
-        missingCheckpointStrategy, handlingMode, HoodieRecord.COMMIT_TIME_METADATA_FIELD,
+        missingCheckpointStrategy, handlingMode,
+        HoodieRecord.COMMIT_TIME_METADATA_FIELD,

Review Comment:
   Revert the format change only.



##########
hudi-spark-datasource/hudi-spark/src/test/scala/org/apache/spark/sql/hudi/procedure/TestCopyToTableProcedure.scala:
##########
@@ -188,10 +191,11 @@ class TestCopyToTableProcedure extends HoodieSparkProcedureTestBase {
       spark.sql(s"insert into $tableName select 2, 'a2', 20, 1500")
 
       // mark beginTime
-      val beginTime = spark.sql(s"select max(_hoodie_commit_time) from $tableName").collectAsList().get(0).get(0)
+      val fs = HadoopFSUtils.getFs(tablePath, spark.sessionState.newHadoopConf())
+      val beginCompletionTime = HoodieDataSourceHelpers.latestCommitCompletionTime(fs, tablePath)
       spark.sql(s"insert into $tableName select 3, 'a3', 30, 2000")
       spark.sql(s"insert into $tableName select 4, 'a4', 40, 2500")
-      val endTime = spark.sql(s"select max(_hoodie_commit_time) from $tableName").collectAsList().get(0).get(0)
+      val endCompletionTime = HoodieDataSourceHelpers.latestCommitCompletionTime(fs, tablePath)

Review Comment:
   Why is `endCompletionTime` the same as `beginCompletionTime` in this test?



##########
hudi-spark-datasource/hudi-spark/src/test/scala/org/apache/spark/sql/hudi/dml/TestMergeIntoTable.scala:
##########
@@ -1002,13 +1003,14 @@ class TestMergeIntoTable extends HoodieSparkSqlTestBase with ScalaAssertionSuppo
         val hudiIncDF1 = spark.read.format("org.apache.hudi")
           .option(DataSourceReadOptions.QUERY_TYPE.key, DataSourceReadOptions.QUERY_TYPE_INCREMENTAL_OPT_VAL)
           .option(DataSourceReadOptions.BEGIN_INSTANTTIME.key, "000")
-          .option(DataSourceReadOptions.END_INSTANTTIME.key, firstCommitTime)
+          .option(DataSourceReadOptions.END_INSTANTTIME.key, firstCompletionTime)

Review Comment:
   Check whether `BEGIN_INSTANTTIME` needs to be updated as well.



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: [email protected]

For queries about this service, please contact Infrastructure at:
[email protected]
