[GitHub] [spark] viirya commented on a change in pull request #26751: [SPARK-30107][SQL] Expose nested schema pruning to all V2 sources

2019-12-11 Thread GitBox
viirya commented on a change in pull request #26751: [SPARK-30107][SQL] Expose nested schema pruning to all V2 sources
URL: https://github.com/apache/spark/pull/26751#discussion_r356911410
 
 

 ##
 File path: sql/core/src/main/scala/org/apache/spark/sql/execution/datasources/v2/PushDownUtils.scala
 ##
 @@ -76,28 +78,46 @@ object PushDownUtils extends PredicateHelper {
    * @return the created `ScanConfig`(since column pruning is the last step of operator pushdown),
    *         and new output attributes after column pruning.
    */
-  // TODO: nested column pruning.
   def pruneColumns(
       scanBuilder: ScanBuilder,
       relation: DataSourceV2Relation,
-      exprs: Seq[Expression]): (Scan, Seq[AttributeReference]) = {
+      projects: Seq[NamedExpression],
+      filters: Seq[Expression]): (Scan, Seq[AttributeReference]) = {
     scanBuilder match {
+      case r: SupportsPushDownRequiredColumns if SQLConf.get.nestedSchemaPruningEnabled =>
 
 Review comment:
   If we know the `scanBuilder` does not support nested schema pruning, do we still need to go down this path?
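
To make the question concrete, here is a minimal, self-contained sketch of the kind of capability-gated dispatch the comment hints at. `NestedPruningAware` and `ColumnPruner` are invented names for illustration only and are not part of the actual DSv2 API; this is not the PR's code.

```
import org.apache.spark.sql.types.StructType

object PruneDispatchSketch {
  // Hypothetical stand-ins: NestedPruningAware only marks "this builder can honour
  // a schema with narrowed struct fields".
  trait ColumnPruner { def pruneColumns(required: StructType): Unit }
  trait NestedPruningAware extends ColumnPruner

  def prune(
      builder: ColumnPruner,
      nestedPruningEnabled: Boolean,
      topLevelPruned: StructType,
      nestedPruned: StructType): Unit = builder match {
    case b: NestedPruningAware if nestedPruningEnabled =>
      // Only here is computing the nested-pruned schema worthwhile.
      b.pruneColumns(nestedPruned)
    case b =>
      // Builders that ignore nested structure only ever need whole top-level columns.
      b.pruneColumns(topLevelPruned)
  }
}
```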





[GitHub] [spark] viirya commented on a change in pull request #26751: [SPARK-30107][SQL] Expose nested schema pruning to all V2 sources

2019-12-11 Thread GitBox
viirya commented on a change in pull request #26751: [SPARK-30107][SQL] Expose nested schema pruning to all V2 sources
URL: https://github.com/apache/spark/pull/26751#discussion_r356910815
 
 

 ##
 File path: sql/core/src/main/scala/org/apache/spark/sql/execution/datasources/v2/FileScanBuilder.scala
 ##
 @@ -27,15 +27,20 @@ abstract class FileScanBuilder(
     dataSchema: StructType) extends ScanBuilder with SupportsPushDownRequiredColumns {
   private val partitionSchema = fileIndex.partitionSchema
   private val isCaseSensitive = sparkSession.sessionState.conf.caseSensitiveAnalysis
+  protected val supportsNestedSchemaPruning: Boolean = false
   protected var requiredSchema = StructType(dataSchema.fields ++ partitionSchema.fields)

   override def pruneColumns(requiredSchema: StructType): Unit = {
+    // [SPARK-30107] While the passed `requiredSchema` always has pruned nested columns, the actual
+    // data schema of this scan is determined in `readDataSchema`. File formats that don't support
+    // nested schema pruning use `requiredSchema` as a reference and perform the pruning partially.
     this.requiredSchema = requiredSchema
   }

   protected def readDataSchema(): StructType = {
     val requiredNameSet = createRequiredNameSet()
-    val fields = dataSchema.fields.filter { field =>
+    val schema = if (supportsNestedSchemaPruning) requiredSchema else dataSchema
 
 Review comment:
   hmm, so what is the difference between using `requiredSchema` and `dataSchema`? When using the old path without nested schema pruning, the passed `requiredSchema` is always the source of truth. When `supportsNestedSchemaPruning` is false, don't we also go down the old path, yet fall back to `dataSchema`??
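
To see the difference concretely, here is a self-contained sketch; the schemas and the simplified `readDataSchema` below are illustrative, not the actual `FileScanBuilder` code. Starting from `dataSchema`, the top-level name filter keeps the whole `name` struct, while starting from `requiredSchema` preserves the narrowed struct.

```
import java.util.Locale
import org.apache.spark.sql.types._

object ReadDataSchemaSketch {
  // Hypothetical full data schema of the file source.
  val dataSchema: StructType = new StructType()
    .add("id", LongType)
    .add("name", new StructType().add("first", StringType).add("last", StringType))

  // What the optimizer passed to pruneColumns: nested-pruned to name.first only.
  val requiredSchema: StructType = new StructType()
    .add("name", new StructType().add("first", StringType))

  // Simplified stand-in for FileScanBuilder.readDataSchema.
  def readDataSchema(supportsNestedSchemaPruning: Boolean): StructType = {
    val requiredNames = requiredSchema.fieldNames.map(_.toLowerCase(Locale.ROOT)).toSet
    val schema = if (supportsNestedSchemaPruning) requiredSchema else dataSchema
    // The filter only looks at top-level field names, so nested narrowing survives
    // only if we started from requiredSchema.
    StructType(schema.fields.filter(f => requiredNames.contains(f.name.toLowerCase(Locale.ROOT))))
  }

  def main(args: Array[String]): Unit = {
    println(readDataSchema(supportsNestedSchemaPruning = true).catalogString)
    // struct<name:struct<first:string>>
    println(readDataSchema(supportsNestedSchemaPruning = false).catalogString)
    // struct<name:struct<first:string,last:string>>
  }
}
```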
   



[GitHub] [spark] viirya commented on a change in pull request #26751: [SPARK-30107][SQL] Expose nested schema pruning to all V2 sources

2019-12-10 Thread GitBox
viirya commented on a change in pull request #26751: [SPARK-30107][SQL] Expose nested schema pruning to all V2 sources
URL: https://github.com/apache/spark/pull/26751#discussion_r356364276
 
 

 ##
 File path: sql/core/src/main/scala/org/apache/spark/sql/execution/datasources/v2/FileScanBuilder.scala
 ##
 @@ -27,15 +27,20 @@ abstract class FileScanBuilder(
     dataSchema: StructType) extends ScanBuilder with SupportsPushDownRequiredColumns {
   private val partitionSchema = fileIndex.partitionSchema
   private val isCaseSensitive = sparkSession.sessionState.conf.caseSensitiveAnalysis
+  protected val supportsNestedSchemaPruning: Boolean = false
   protected var requiredSchema = StructType(dataSchema.fields ++ partitionSchema.fields)

   override def pruneColumns(requiredSchema: StructType): Unit = {
+    // [SPARK-30107] While the passed `requiredSchema` always has pruned nested columns, the actual
+    // data schema of this scan is determined in `readDataSchema`. File formats that don't support
+    // nested schema pruning use `requiredSchema` as a reference and perform the pruning partially.
     this.requiredSchema = requiredSchema
   }

   protected def readDataSchema(): StructType = {
     val requiredNameSet = createRequiredNameSet()
-    val fields = dataSchema.fields.filter { field =>
+    val schema = if (supportsNestedSchemaPruning) requiredSchema else dataSchema
 
 Review comment:
   What if `supportsNestedSchemaPruning` is true but `SQLConf.get.nestedSchemaPruningEnabled` is disabled? Should we use `dataSchema` in that case?
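
A minimal sketch of the guard this comment seems to ask for; the helper below is hypothetical and not the PR's actual code. It falls back to the possibly nested-pruned `requiredSchema` only when the builder supports nested pruning and the session config enables it.

```
import org.apache.spark.sql.types.StructType

object SchemaSelectionSketch {
  // Hypothetical helper mirroring the readDataSchema decision point.
  def schemaToFilter(
      supportsNestedSchemaPruning: Boolean,
      nestedSchemaPruningEnabled: Boolean,
      requiredSchema: StructType,
      dataSchema: StructType): StructType = {
    // Use the nested-pruned requiredSchema only when both conditions hold;
    // otherwise behave exactly like the old path and start from dataSchema.
    if (supportsNestedSchemaPruning && nestedSchemaPruningEnabled) requiredSchema else dataSchema
  }
}
```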





[GitHub] [spark] viirya commented on a change in pull request #26751: [SPARK-30107][SQL] Expose nested schema pruning to all V2 sources

2019-12-04 Thread GitBox
viirya commented on a change in pull request #26751: [SPARK-30107][SQL] Expose nested schema pruning to all V2 sources
URL: https://github.com/apache/spark/pull/26751#discussion_r354114582
 
 

 ##
 File path: sql/core/src/main/scala/org/apache/spark/sql/execution/datasources/v2/parquet/ParquetScanBuilder.scala
 ##
 @@ -68,6 +68,10 @@ case class ParquetScanBuilder(
   // All filters that can be converted to Parquet are pushed down.
   override def pushedFilters(): Array[Filter] = pushedParquetFilters

+  override def pruneColumns(requiredSchema: StructType): Unit = {
+    this.requiredSchema = requiredSchema
+  }
 
 Review comment:
   For a data source, how does it know whether the passed-in `requiredSchema` is for normal column pruning or nested column pruning?
   
   I ask because, for a data source that does not support nested column pruning, it looks like a nested-pruned required schema is still passed in when `SQLConf.get.nestedSchemaPruningEnabled` is true. How does such a data source know how to act?
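
For illustration only (hypothetical schemas, not taken from the PR), here is what a top-level-pruned versus a nested-pruned `requiredSchema` could look like for the same data schema; from the data source's point of view, both arrive through the same `pruneColumns(StructType)` call.

```
import org.apache.spark.sql.types._

object PrunedSchemaShapes {
  // Hypothetical full data schema: `name` is a struct with two nested fields.
  val dataSchema: StructType = new StructType()
    .add("id", LongType)
    .add("name", new StructType().add("first", StringType).add("last", StringType))

  // Top-level pruning only: `id` is dropped but the whole `name` struct is kept.
  val topLevelPruned: StructType = new StructType()
    .add("name", dataSchema("name").dataType)

  // Nested pruning: `name` is kept but narrowed to `name.first`.
  val nestedPruned: StructType = new StructType()
    .add("name", new StructType().add("first", StringType))

  def main(args: Array[String]): Unit = {
    println(topLevelPruned.catalogString) // struct<name:struct<first:string,last:string>>
    println(nestedPruned.catalogString)   // struct<name:struct<first:string>>
  }
}
```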





[GitHub] [spark] viirya commented on a change in pull request #26751: [SPARK-30107][SQL] Expose nested schema pruning to all V2 sources

2019-12-04 Thread GitBox
viirya commented on a change in pull request #26751: [SPARK-30107][SQL] Expose nested schema pruning to all V2 sources
URL: https://github.com/apache/spark/pull/26751#discussion_r354112143
 
 

 ##
 File path: sql/core/src/main/scala/org/apache/spark/sql/execution/datasources/v2/PushDownUtils.scala
 ##
 @@ -76,28 +78,48 @@ object PushDownUtils extends PredicateHelper {
    * @return the created `ScanConfig`(since column pruning is the last step of operator pushdown),
    *         and new output attributes after column pruning.
    */
-  // TODO: nested column pruning.
   def pruneColumns(
       scanBuilder: ScanBuilder,
       relation: DataSourceV2Relation,
-      exprs: Seq[Expression]): (Scan, Seq[AttributeReference]) = {
+      projects: Seq[NamedExpression],
+      filters: Seq[Expression]): (Scan, Seq[AttributeReference]) = {
     scanBuilder match {
+      case r: SupportsPushDownRequiredColumns if SQLConf.get.nestedSchemaPruningEnabled =>
+        val rootFields = SchemaPruning.identifyRootFields(projects, filters)
+        val prunedSchema = if (rootFields.nonEmpty) {
 
 Review comment:
   there was a check in `prunePhysicalColumns`:
   
   ```
   if (requestedRootFields.exists { root: RootField => !root.derivedFromAtt }) {
...
   }
   ```
   
   Was that check removed in this move?
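
For context, here is a self-contained toy of what that guard does; the `Root` case class below is a simplified, hypothetical stand-in for `SchemaPruning.RootField`. Pruning is only worthwhile when at least one requested root field is narrower than a whole attribute.

```
import org.apache.spark.sql.types._

object DerivedFromAttGuard {
  // Simplified, hypothetical stand-in for SchemaPruning.RootField.
  final case class Root(field: StructField, derivedFromAtt: Boolean)

  // Prune only if some requested root field is narrower than a whole attribute;
  // if every root field is just a full column reference, pruning would change nothing.
  def shouldPrune(roots: Seq[Root]): Boolean = roots.exists(r => !r.derivedFromAtt)

  def main(args: Array[String]): Unit = {
    val wholeColumn = Root(StructField("id", LongType), derivedFromAtt = true)
    val nestedAccess = Root(
      StructField("name", new StructType().add("first", StringType)), derivedFromAtt = false)
    println(shouldPrune(Seq(wholeColumn)))               // false
    println(shouldPrune(Seq(wholeColumn, nestedAccess))) // true
  }
}
```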

