nsivabalan commented on code in PR #5430:
URL: https://github.com/apache/hudi/pull/5430#discussion_r924360103


##########
hudi-spark-datasource/hudi-spark-common/src/main/scala/org/apache/hudi/HoodieBaseRelation.scala:
##########
@@ -289,19 +347,19 @@ abstract class HoodieBaseRelation(val sqlContext: SQLContext,
     //       the partition path, and omitted from the data file, back into fetched rows;
     //       Note that, by default, partition columns are not omitted therefore specifying
     //       partition schema for reader is not required
-    val (partitionSchema, dataSchema, prunedRequiredSchema) =
+    val (partitionSchema, dataSchema, requiredDataSchema) =
       tryPrunePartitionColumns(tableSchema, requiredSchema)
 
     if (fileSplits.isEmpty) {
       sparkSession.sparkContext.emptyRDD
     } else {
-      val rdd = composeRDD(fileSplits, partitionSchema, dataSchema, prunedRequiredSchema, filters)
+      val rdd = composeRDD(fileSplits, partitionSchema, dataSchema, requiredDataSchema, requiredColumns, filters)

Review Comment:
   Shouldn't the last-but-one argument (`requiredColumns`) refer to `targetColumns`?
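   To illustrate the distinction at play here, a minimal sketch (not Hudi's actual implementation; the names `tryPrunePartitionColumns`, the toy `Schema` type, and the sample columns are all hypothetical simplifications) of why the pruned `requiredDataSchema` and the full list of requested columns can legitimately differ, so both end up being passed downstream:

   ```scala
   // Hypothetical, simplified sketch: Hudi uses Spark StructType schemas,
   // here a schema is just a sequence of column names.
   object PrunePartitionColumnsSketch {
     type Schema = Seq[String]

     // Drop partition columns from the required schema: the reader need not
     // fetch them from data files, since they are re-appended from the
     // partition path after reading.
     def tryPrunePartitionColumns(tableSchema: Schema,
                                  requiredSchema: Schema,
                                  partitionColumns: Set[String]): (Schema, Schema, Schema) = {
       val partitionSchema    = tableSchema.filter(partitionColumns.contains)
       val dataSchema         = tableSchema.filterNot(partitionColumns.contains)
       val requiredDataSchema = requiredSchema.filterNot(partitionColumns.contains)
       (partitionSchema, dataSchema, requiredDataSchema)
     }

     def main(args: Array[String]): Unit = {
       val tableSchema    = Seq("id", "value", "dt") // "dt" is a partition column
       val requiredSchema = Seq("id", "dt")          // caller asked for both
       val (partitionSchema, dataSchema, requiredDataSchema) =
         tryPrunePartitionColumns(tableSchema, requiredSchema, Set("dt"))
       // "dt" is pruned from requiredDataSchema but still among the columns
       // the caller requested, so the reader must receive both pieces.
       println(s"partition=$partitionSchema data=$dataSchema requiredData=$requiredDataSchema")
     }
   }
   ```

   Under this reading, passing both `requiredDataSchema` and the originally requested columns to `composeRDD` is deliberate rather than redundant, which is why the naming of that second-to-last argument matters.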


