minihippo commented on a change in pull request #3173:
URL: https://github.com/apache/hudi/pull/3173#discussion_r745638078
##########
File path:
hudi-spark-datasource/hudi-spark/src/main/scala/org/apache/hudi/MergeOnReadSnapshotRelation.scala
##########
@@ -146,7 +154,7 @@ class MergeOnReadSnapshotRelation(val sqlContext: SQLContext,
     rdd.asInstanceOf[RDD[Row]]
   }
-  def buildFileIndex(filters: Array[Filter]): List[HoodieMergeOnReadFileSplit] = {
+  def buildFileIndex(filters: Array[Filter]): List[List[HoodieMergeOnReadFileSplit]] = {
Review comment:
> may I know why this change?
With a bucket index, Spark can treat the Hudi table as a bucketed table, so each task reads the files belonging to the same bucket. When only a single partition is read, a task reads just one file group. But when multiple partitions are read, each task of the bucketed table reads the files that share the same bucket id across the different partitions, so a single task can hold more than one file split. That is why the return type becomes a list of lists.
Treating the table as a bucketed table accelerates queries, especially joins and group-bys on the bucket index key.
--
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
To unsubscribe, e-mail: [email protected]
For queries about this service, please contact Infrastructure at:
[email protected]