dbtsai commented on a change in pull request #26751: [SPARK-30107][SQL] Expose
nested schema pruning to all V2 sources
URL: https://github.com/apache/spark/pull/26751#discussion_r356771927
##########
File path:
sql/core/src/main/scala/org/apache/spark/sql/execution/datasources/v2/FileScanBuilder.scala
##########
@@ -27,15 +27,20 @@ abstract class FileScanBuilder(
     dataSchema: StructType) extends ScanBuilder with SupportsPushDownRequiredColumns {
   private val partitionSchema = fileIndex.partitionSchema
   private val isCaseSensitive = sparkSession.sessionState.conf.caseSensitiveAnalysis
+  protected val supportsNestedSchemaPruning: Boolean = false
   protected var requiredSchema = StructType(dataSchema.fields ++ partitionSchema.fields)
   override def pruneColumns(requiredSchema: StructType): Unit = {
+    // [SPARK-30107] While the passed `requiredSchema` always has pruned nested columns, the
+    // actual data schema of this scan is determined in `readDataSchema`. File formats that
+    // don't support nested schema pruning use `requiredSchema` as a reference and perform
+    // the pruning partially.
     this.requiredSchema = requiredSchema
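
For illustration, a minimal sketch (not the actual implementation) of the partial-pruning behavior the new comment describes; `readDataSchemaSketch` is a hypothetical stand-in for the builder's `readDataSchema`, with case sensitivity and partition columns omitted:

```scala
import org.apache.spark.sql.types.StructType

// Sketch only: a format that opts into nested pruning gets the fully pruned
// schema; any other format falls back to its full data schema and drops just
// the unneeded top-level columns, using `requiredSchema` as the reference.
def readDataSchemaSketch(
    dataSchema: StructType,
    requiredSchema: StructType,
    supportsNestedSchemaPruning: Boolean): StructType = {
  if (supportsNestedSchemaPruning) {
    // The format can honor the fully pruned schema, nested fields included.
    requiredSchema
  } else {
    // Partial pruning: keep each full top-level field of `dataSchema` whose
    // name appears in the pruned `requiredSchema`.
    val requiredNames = requiredSchema.fieldNames.toSet
    StructType(dataSchema.fields.filter(f => requiredNames.contains(f.name)))
  }
}
```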
Review comment:
It's not in the scope of this PR, but I feel we could always pass the pruned
`requiredSchema` to the readers, even those that don't support any schema
pruning. The benefits would be: 1) less data is moved into Spark even when the
readers still have to read the full data; 2) the code for handling nested
schema pruning and top-level pruning stays consistent.
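
As a rough sketch of that idea, assuming a hypothetical `projectRow` helper (not a Spark API): a reader that must read the full file could still project each row down to the pruned `requiredSchema` before the rows flow into Spark, so both benefits fall out of one code path:

```scala
import org.apache.spark.sql.Row
import org.apache.spark.sql.types.StructType

// Hypothetical helper, not a Spark API: a reader that cannot push pruning into
// the file format reads full rows, then drops the unneeded columns before the
// rows reach Spark, driven by the same pruned `requiredSchema` as everyone else.
def projectRow(row: Row, fullSchema: StructType, requiredSchema: StructType): Row = {
  // Map each required column name to its position in the full row, then
  // rebuild the row with only those positions.
  val indices = requiredSchema.fieldNames.map(fullSchema.fieldIndex)
  Row.fromSeq(indices.map(row.get).toSeq)
}
```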