jonvex commented on code in PR #11770:
URL: https://github.com/apache/hudi/pull/11770#discussion_r1722132467
##########
hudi-spark-datasource/hudi-spark-common/src/main/scala/org/apache/spark/sql/execution/datasources/parquet/HoodieFileGroupReaderBasedParquetFileFormat.scala:
##########
@@ -154,18 +183,22 @@ class
HoodieFileGroupReaderBasedParquetFileFormat(tableState: HoodieTableState,
// Append partition values to rows and project to output schema
appendPartitionAndProject(
reader.getClosableIterator,
- requiredSchema,
- partitionSchema,
+ requestedSchema,
+ remainingPartitionSchema,
Review Comment:
We need to investigate whether DPP with timestamp keygen columns will work or produce faulty results. I'll discuss this with you further.
**Answer to your question:**
No, that happens at the relation level. This is one of the reasons we moved all the logic into the file format: we use HadoopFsRelation instead of custom relations, so Spark applies all the optimizations it performs for regular parquet tables.
HadoopFsRelation is constructed with these fields:
```scala
case class HadoopFsRelation(
    location: FileIndex,
    partitionSchema: StructType,
    // The top-level columns in `dataSchema` should match the actual physical file schema, otherwise
    // the ORC data source may not work with the by-ordinal mode.
    dataSchema: StructType,
    bucketSpec: Option[BucketSpec],
    fileFormat: FileFormat,
    options: Map[String, String])(val sparkSession: SparkSession)
  extends BaseRelation with FileRelation {
```
If you take a look at BaseFileOnlyRelation, we set the partition schema to empty unless extracting partition columns from the path is enabled, which is why DPP wasn't happening previously.
https://github.com/apache/hudi/blob/db5c2d97dc94122ebd63e6200858eabc4b119178/hudi-spark-datasource/hudi-spark-common/src/main/scala/org/apache/hudi/BaseFileOnlyRelation.scala#L154
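To make the mechanism concrete, here is a minimal sketch (not the actual Hudi code; the field and flag names below are illustrative) of the pattern described above. If the relation reports an empty `partitionSchema`, Spark sees no partition columns to prune on, so DPP is never triggered:

```scala
import org.apache.spark.sql.types.{StringType, StructField, StructType}

// Hypothetical example: the table is partitioned by a single "dt" column.
val fullPartitionSchema: StructType =
  StructType(Seq(StructField("dt", StringType, nullable = true)))

// Sketch of the relation-level choice: advertise the real partition schema
// only when partition values are extracted from the path; otherwise report
// an empty schema, leaving Spark with nothing to apply DPP against.
def relationPartitionSchema(extractFromPath: Boolean): StructType =
  if (extractFromPath) fullPartitionSchema else StructType(Nil)
```

With the file-format-based approach, HadoopFsRelation always receives the real partition schema, so this conditional disappears and DPP becomes possible.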