xushiyan commented on code in PR #5708:
URL: https://github.com/apache/hudi/pull/5708#discussion_r927098140
##########
hudi-spark-datasource/hudi-spark-common/src/main/scala/org/apache/hudi/HoodieBaseRelation.scala:
##########
@@ -564,42 +538,57 @@ abstract class HoodieBaseRelation(val sqlContext: SQLContext,
       // we have to eagerly initialize all of the readers even though only one specific to the type
       // of the file being read will be used. This is required to avoid serialization of the whole
       // relation (containing file-index for ex) and passing it to the executor
-      val reader = tableBaseFileFormat match {
-        case HoodieFileFormat.PARQUET =>
-          HoodieDataSourceHelper.buildHoodieParquetReader(
-            sparkSession = spark,
-            dataSchema = dataSchema.structTypeSchema,
-            partitionSchema = partitionSchema,
-            requiredSchema = requiredSchema.structTypeSchema,
-            filters = filters,
-            options = options,
-            hadoopConf = hadoopConf,
-            // We're delegating to Spark to append partition values to every row only in cases
-            // when these corresponding partition-values are not persisted w/in the data file itself
-            appendPartitionValues = shouldExtractPartitionValuesFromPartitionPath
-          )
+      val (read: (PartitionedFile => Iterator[InternalRow]), schema: StructType) =
+        tableBaseFileFormat match {
+          case HoodieFileFormat.PARQUET =>
+            (
+              HoodieDataSourceHelper.buildHoodieParquetReader(
+                sparkSession = spark,
+                dataSchema = dataSchema.structTypeSchema,
+                partitionSchema = partitionSchema,
+                requiredSchema = requiredSchema.structTypeSchema,
+                filters = filters,
+                options = options,
+                hadoopConf = hadoopConf,
+                // We're delegating to Spark to append partition values to every row only in cases
+                // when these corresponding partition-values are not persisted w/in the data file itself
+                appendPartitionValues = shouldExtractPartitionValuesFromPartitionPath
+              ),
+              // Since partition values by default are omitted, and not persisted w/in data-files by Spark,
+              // data-file readers (such as [[ParquetFileFormat]]) have to inject partition values while reading
+              // the data. As such, actual full schema produced by such reader is composed of
+              //    a) Prepended partition column values
+              //    b) Data-file schema (projected or not)
+              StructType(partitionSchema.fields ++ requiredSchema.structTypeSchema.fields)
Review Comment:
why prepend not append? curious to know about the considerations
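To make the question concrete, here is a minimal, self-contained sketch of the two orderings being discussed. It uses a hypothetical `Field` case class and made-up column names rather than Spark's `StructType`/`StructField`, so it only illustrates the field-ordering choice, not Hudi's actual API; whichever order the combined schema uses has to match the order in which the reader actually emits values per row.

```scala
// Hypothetical stand-in for Spark's StructField, just to show ordering.
case class Field(name: String)

object SchemaOrderSketch {
  def main(args: Array[String]): Unit = {
    val partitionFields = Seq(Field("dt"), Field("region")) // hypothetical partition columns
    val dataFields      = Seq(Field("uuid"), Field("amount")) // hypothetical data-file columns

    // Prepending, as the PR does: partition columns lead every produced row.
    val prepended = partitionFields ++ dataFields

    // Appending: partition columns trail the data-file columns instead.
    val appended = dataFields ++ partitionFields

    assert(prepended.map(_.name) == Seq("dt", "region", "uuid", "amount"))
    assert(appended.map(_.name) == Seq("uuid", "amount", "dt", "region"))
    println("orderings verified")
  }
}
```

Either ordering works as long as the declared schema and the reader's row layout agree; the review question is about which convention the surrounding Spark machinery expects.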
##########
hudi-spark-datasource/hudi-spark-common/src/main/scala/org/apache/hudi/HoodieMergeOnReadRDD.scala:
##########
@@ -129,10 +156,13 @@ class HoodieMergeOnReadRDD(@transient sc: SparkContext,
       //    a) It does use one of the standard (and whitelisted) Record Payload classes
       // then we can avoid reading and parsing the records w/ _full_ schema, and instead only
       // rely on projected one, nevertheless being able to perform merging correctly
-      if (!whitelistedPayloadClasses.contains(tableState.recordPayloadClassName))
-        (fileReaders.fullSchemaFileReader(split.dataFile.get), dataSchema)
-      else
-        (fileReaders.requiredSchemaFileReaderForMerging(split.dataFile.get), requiredSchema)
+      val reader = if (!whitelistedPayloadClasses.contains(tableState.recordPayloadClassName)) {
Review Comment:
/nit i'd prefer if() without negation
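The suggested refactor is just to test the positive condition and swap the branches, which reads more directly. A minimal sketch, using hypothetical string stand-ins for the two reader factories rather than Hudi's actual `fileReaders` API:

```scala
object ReaderChoiceSketch {
  // Hypothetical helper mirroring the diff's branch logic, with the
  // negation removed and the branches swapped as the reviewer suggests.
  def pickReader(payloadClassName: String, whitelisted: Set[String]): String =
    if (whitelisted.contains(payloadClassName))
      "requiredSchemaFileReaderForMerging" // known payload: projected schema suffices
    else
      "fullSchemaFileReader" // unknown payload: must read with the full schema

  def main(args: Array[String]): Unit = {
    val whitelist = Set("OverwriteWithLatestAvroPayload")
    assert(pickReader("OverwriteWithLatestAvroPayload", whitelist) == "requiredSchemaFileReaderForMerging")
    assert(pickReader("CustomPayload", whitelist) == "fullSchemaFileReader")
    println("branch logic verified")
  }
}
```

Behavior is identical either way; the positive form simply puts the common, cheaper path first.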
--
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
To unsubscribe, e-mail: [email protected]
For queries about this service, please contact Infrastructure at:
[email protected]