yuzhaojing commented on code in PR #12642:
URL: https://github.com/apache/hudi/pull/12642#discussion_r1948429984


##########
hudi-spark-datasource/hudi-spark-common/src/main/scala/org/apache/hudi/HoodieBaseRelation.scala:
##########
@@ -350,6 +350,7 @@ abstract class HoodieBaseRelation(val sqlContext: SQLContext,
    * NOTE: DO NOT OVERRIDE THIS METHOD
    */
   override final def buildScan(requiredColumns: Array[String], filters: Array[Filter]): RDD[Row] = {
+    fileIndex.refresh()

Review Comment:
   I have reviewed the corresponding code in `HoodieMergeOnReadSnapshotHadoopFsRelationFactory`, and a refresh still appears necessary at this point. Essentially, the same Spark context reuses the relation when executing identical queries, so without a refresh the relation would keep serving a stale file listing.
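   To make the reuse problem concrete, here is a minimal, self-contained sketch (hypothetical stand-in types, not Hudi's actual classes): the file listing is captured once when the index is built, so a reused instance keeps answering from that snapshot until `refresh()` re-lists, which is what the patch does at the top of `buildScan`.

   ```scala
   // Hypothetical stand-in for a cached file index: listFiles() simulates
   // listing the table's storage, and the result is cached at construction.
   class CachedFileIndex(listFiles: () => Seq[String]) {
     private var cached: Seq[String] = listFiles()
     def files: Seq[String] = cached
     // Re-list storage; analogous to fileIndex.refresh() in the patch.
     def refresh(): Unit = { cached = listFiles() }
   }

   object RelationReuseDemo extends App {
     var storage = Seq("base_1.parquet")
     val index = new CachedFileIndex(() => storage)

     storage = storage :+ "base_2.parquet" // a new commit lands after the relation was built
     assert(index.files.size == 1)         // reused relation still sees the stale listing
     index.refresh()                       // the fix: refresh before scanning
     assert(index.files.size == 2)         // new file is now visible
   }
   ```

   The design point is that the refresh sits inside `buildScan` rather than at relation construction, so even a cached/reused relation re-lists on every scan.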


