CTTY commented on code in PR #11947:
URL: https://github.com/apache/hudi/pull/11947#discussion_r1797786059


##########
hudi-spark-datasource/hudi-spark-common/src/main/scala/org/apache/hudi/HoodieMergeOnReadRDD.scala:
##########
@@ -82,7 +82,8 @@ class HoodieMergeOnReadRDD(@transient sc: SparkContext,
                           @transient fileSplits: Seq[HoodieMergeOnReadFileSplit],
                            includeStartTime: Boolean = false,
                            startTimestamp: String = null,
-                           endTimestamp: String = null)
+                           endTimestamp: String = null,
+                           includedTimestamps: Set[String] = null)

Review Comment:
   Yes, these are still instant times, since they are used to filter records based on the `HoodieRecord.COMMIT_TIME_METADATA_FIELD` value in each record.
   
   I'll fix the scaladoc.
   
   Also, I think we can remove `startTimestamp` and `endTimestamp` since they are no longer used; I'll file a JIRA to track this.
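
   For illustration, here is a minimal, self-contained sketch of how an `includedTimestamps` set could filter records by their commit-time metadata field. The `Record` case class and `isIncluded` helper are hypothetical stand-ins, not Hudi API; only the field name `_hoodie_commit_time` (the value of `HoodieRecord.COMMIT_TIME_METADATA_FIELD`) comes from Hudi.

```scala
// Hypothetical stand-in for a record carrying Hudi metadata columns.
case class Record(meta: Map[String, String])

// Value of HoodieRecord.COMMIT_TIME_METADATA_FIELD in Hudi.
val CommitTimeField = "_hoodie_commit_time"

// Keep a record only when its commit time is in the included set;
// a null set (the parameter's default in the diff above) disables the filter.
def isIncluded(record: Record, includedTimestamps: Set[String]): Boolean =
  includedTimestamps == null ||
    record.meta.get(CommitTimeField).exists(includedTimestamps.contains)

val records = Seq(
  Record(Map(CommitTimeField -> "20240101000000")),
  Record(Map(CommitTimeField -> "20240102000000"))
)
val kept = records.filter(isIncluded(_, Set("20240101000000")))
```

   With `includedTimestamps = null`, `kept` would contain both records, matching the default (no filtering) behavior implied by the parameter's `null` default.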



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: [email protected]

For queries about this service, please contact Infrastructure at:
[email protected]
