jonvex commented on code in PR #10225:
URL: https://github.com/apache/hudi/pull/10225#discussion_r1423065162


##########
hudi-spark-datasource/hudi-spark-common/src/main/scala/org/apache/hudi/HoodieCDCFileIndex.scala:
##########
@@ -78,4 +79,8 @@ class HoodieCDCFileIndex (override val spark: SparkSession,
       new Path(fileGroupId.getPartitionPath, fileGroupId.getFileId).toString
     }.toArray
   }
+
+  override def getRequiredFilters: Seq[Filter] = {
+    Seq.empty
+  }

Review Comment:
   CDC queries return 4 columns: op, ts_ms, before, and after. The incremental 
filters are on the field "_hoodie_commit_time", so we would get an exception if 
we tried to filter on it. If you look at CDCRelation, you can see that 
buildScan0 does not use filters. From that, plus checking with @linliu-code, I 
think this is the correct course of action.
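
   To illustrate the reasoning, here is a minimal, self-contained sketch of the override under discussion. The `Filter` trait below is a local stand-in for `org.apache.spark.sql.sources.Filter` (used only so the snippet compiles without a Spark dependency), and `CdcFileIndexSketch` is a hypothetical stand-in for `HoodieCDCFileIndex`:

```scala
// Stand-in for org.apache.spark.sql.sources.Filter -- hypothetical, included
// only to keep this sketch self-contained without a Spark dependency.
sealed trait Filter
final case class EqualTo(attribute: String, value: Any) extends Filter

// Hypothetical stand-in for HoodieCDCFileIndex. CDC scans expose only the
// columns op / ts_ms / before / after, so requiring an incremental filter on
// "_hoodie_commit_time" would reference a column that does not exist in the
// CDC output schema. Returning no required filters avoids that exception,
// matching CDCRelation.buildScan0, which ignores pushed filters.
object CdcFileIndexSketch {
  def getRequiredFilters: Seq[Filter] = Seq.empty
}
```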



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: [email protected]

For queries about this service, please contact Infrastructure at:
[email protected]
