codope commented on code in PR #10018:
URL: https://github.com/apache/hudi/pull/10018#discussion_r1386418453


##########
hudi-spark-datasource/hudi-spark-common/src/main/scala/org/apache/spark/sql/execution/datasources/parquet/HoodieFileGroupReaderBasedParquetFileFormat.scala:
##########
@@ -239,8 +239,9 @@ class HoodieFileGroupReaderBasedParquetFileFormat(tableState: HoodieTableState,
     //file reader for reading a hudi base file that needs to be merged with log files
     val preMergeBaseFileReader = if (isMOR) {
       // Add support for reading files using inline file system.
-      super.buildReaderWithPartitionValues(sparkSession, dataSchema, partitionSchema,
-        requiredSchemaWithMandatory, requiredFilters, options, new Configuration(hadoopConf))
+      super.buildReaderWithPartitionValues(sparkSession, dataSchema, partitionSchema, requiredSchemaWithMandatory,
+        if (shouldUseRecordPosition) requiredFilters else filters ++ requiredFilters,

Review Comment:
   We should further check `filters` for the presence of record key fields and, only if they are present, add just the record-key filters. For filters on other fields, there may be a correctness issue.
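   The suggested check could be sketched roughly as below. This is only an illustration, not Hudi code: it uses a simplified stand-in for Spark's `org.apache.spark.sql.sources.Filter` (real filters expose their referenced columns via `references`), and all names are hypothetical.

   ```scala
   // Simplified stand-in for a pushed-down Spark Filter; the real class
   // exposes the referenced column names via `references: Array[String]`.
   final case class SimpleFilter(references: Set[String])

   // Keep only the filters whose referenced columns are all record key
   // fields; filters touching any other field are dropped, since merging
   // with log records could make them incorrect on the base file alone.
   def recordKeyOnlyFilters(filters: Seq[SimpleFilter],
                            recordKeyFields: Set[String]): Seq[SimpleFilter] =
     filters.filter(f => f.references.nonEmpty && f.references.subsetOf(recordKeyFields))
   ```

   With such a helper, the branch above would pass `recordKeyOnlyFilters(filters, recordKeyFields) ++ requiredFilters` instead of `filters ++ requiredFilters`.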



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: [email protected]

For queries about this service, please contact Infrastructure at:
[email protected]
