harsh1231 commented on a change in pull request #4755:
URL: https://github.com/apache/hudi/pull/4755#discussion_r800485408



##########
File path: 
hudi-utilities/src/main/java/org/apache/hudi/utilities/sources/S3EventsHoodieIncrSource.java
##########
@@ -101,24 +101,31 @@ public S3EventsHoodieIncrSource(
             ? lastCkptStr.get().isEmpty() ? Option.empty() : lastCkptStr
             : Option.empty();
 
-    Pair<String, String> instantEndpts =
+    Pair<String, Pair<String, String>> queryTypeAndInstantEndpts =
         IncrSourceHelper.calculateBeginAndEndInstants(
             sparkContext, srcPath, numInstantsPerFetch, beginInstant, missingCheckpointStrategy);
 
-    if (instantEndpts.getKey().equals(instantEndpts.getValue())) {
-      LOG.warn("Already caught up. Begin Checkpoint was :" + instantEndpts.getKey());
-      return Pair.of(Option.empty(), instantEndpts.getKey());
+    if (queryTypeAndInstantEndpts.getValue().getKey().equals(queryTypeAndInstantEndpts.getValue().getValue())) {
+      LOG.warn("Already caught up. Begin Checkpoint was :" + queryTypeAndInstantEndpts.getKey());
+      return Pair.of(Option.empty(), queryTypeAndInstantEndpts.getKey());
     }
 
+    DataFrameReader metaReader = null;
     // Do incremental pull. Set end instant if available.
-    DataFrameReader metaReader = sparkSession.read().format("org.apache.hudi")
-        .option(DataSourceReadOptions.QUERY_TYPE().key(), DataSourceReadOptions.QUERY_TYPE_INCREMENTAL_OPT_VAL())
-        .option(DataSourceReadOptions.BEGIN_INSTANTTIME().key(), instantEndpts.getLeft())
-        .option(DataSourceReadOptions.END_INSTANTTIME().key(), instantEndpts.getRight());
+    if (queryTypeAndInstantEndpts.getKey().equals(DataSourceReadOptions.QUERY_TYPE_INCREMENTAL_OPT_VAL())) {
+      metaReader = sparkSession.read().format("org.apache.hudi")
+          .option(DataSourceReadOptions.QUERY_TYPE().key(), DataSourceReadOptions.QUERY_TYPE_INCREMENTAL_OPT_VAL())
+          .option(DataSourceReadOptions.BEGIN_INSTANTTIME().key(), queryTypeAndInstantEndpts.getRight().getLeft())
+          .option(DataSourceReadOptions.END_INSTANTTIME().key(), queryTypeAndInstantEndpts.getRight().getRight());
+    } else {
+      // if checkpoint is missing from source table, and if strategy is set to READ_UPTO_LATEST_COMMIT, we have to issue snapshot query
+      metaReader = sparkSession.read().format("org.apache.hudi")
+          .option(DataSourceReadOptions.QUERY_TYPE().key(), DataSourceReadOptions.QUERY_TYPE_SNAPSHOT_OPT_VAL());

Review comment:
       @nsivabalan for the snapshot query path, how can we filter out data that was already consumed up to the last checkpoint?
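One possible answer (a hedged sketch, not necessarily what this PR ends up doing): when falling back to a snapshot query, filter the snapshot on Hudi's `_hoodie_commit_time` metadata column so rows at or before the last consumed checkpoint are dropped. The `buildCheckpointFilter` helper below is hypothetical; only the `_hoodie_commit_time` column name comes from Hudi itself.

```java
// Hypothetical helper: builds a Spark SQL filter expression that drops rows
// already consumed up to the last checkpoint. Intended usage (assumption):
//   Dataset<Row> source = metaReader.load(srcPath)
//       .filter(CheckpointFilter.buildCheckpointFilter(lastCkpt));
public class CheckpointFilter {

  // _hoodie_commit_time is the Hudi metadata column holding each row's commit instant.
  private static final String COMMIT_TIME_FIELD = "_hoodie_commit_time";

  public static String buildCheckpointFilter(String lastCheckpoint) {
    if (lastCheckpoint == null || lastCheckpoint.isEmpty()) {
      // No checkpoint yet: keep everything from the snapshot.
      return "true";
    }
    // Hudi commit instants are fixed-width timestamps, so lexicographic
    // string comparison matches chronological order.
    return String.format("%s > '%s'", COMMIT_TIME_FIELD, lastCheckpoint);
  }
}
```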

##########
File path: 
hudi-utilities/src/main/java/org/apache/hudi/utilities/sources/helpers/IncrSourceHelper.java
##########
@@ -88,15 +89,15 @@ private static String getStrictlyLowerTimestamp(String timestamp) {
       }
     });
 
-    if (!beginInstantTime.equals(DEFAULT_BEGIN_TIMESTAMP)) {
+    if (missingCheckpointStrategy == MissingCheckpointStrategy.READ_LATEST) {

Review comment:
       if (checkpoint is not present) or (checkpoint is present but the checkpoint commit is not in the active timeline) {
         SNAPSHOT_QUERY
       }
       I think this logic is missing here.
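A minimal sketch of the rule being suggested, under stated assumptions: `QueryTypeChooser` is a hypothetical helper, and `activeCommits` stands in for the set of commit instants in the table's active timeline (in real Hudi code this would come from the timeline API, not a plain `Set`).

```java
import java.util.Optional;
import java.util.Set;

// Hypothetical helper sketching the reviewer's suggested rule: fall back to a
// snapshot query when there is no checkpoint, or when the checkpointed commit
// has been cleaned/archived out of the active timeline.
public class QueryTypeChooser {

  public static String chooseQueryType(Optional<String> lastCheckpoint, Set<String> activeCommits) {
    boolean checkpointMissing = !lastCheckpoint.isPresent() || lastCheckpoint.get().isEmpty();
    // Checkpoint exists, but its commit is no longer in the active timeline.
    boolean commitGone = !checkpointMissing && !activeCommits.contains(lastCheckpoint.get());
    if (checkpointMissing || commitGone) {
      return "snapshot";   // placeholder for DataSourceReadOptions.QUERY_TYPE_SNAPSHOT_OPT_VAL()
    }
    return "incremental";  // placeholder for DataSourceReadOptions.QUERY_TYPE_INCREMENTAL_OPT_VAL()
  }
}
```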




-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: [email protected]

For queries about this service, please contact Infrastructure at:
[email protected]

