Github user viirya commented on a diff in the pull request:
https://github.com/apache/spark/pull/16578#discussion_r151597063
--- Diff: sql/catalyst/src/main/scala/org/apache/spark/sql/internal/SQLConf.scala ---
@@ -961,6 +961,15 @@ object SQLConf {
.booleanConf
.createWithDefault(true)
+  val NESTED_SCHEMA_PRUNING_ENABLED =
+    buildConf("spark.sql.nestedSchemaPruning.enabled")
+      .internal()
+      .doc("Prune nested fields from a logical relation's output which are unnecessary in " +
+        "satisfying a query. This optimization allows columnar file format readers to avoid " +
+        "reading unnecessary nested column data.")
+      .booleanConf
+      .createWithDefault(true)
--- End diff ---
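For illustration, a minimal sketch of how this flag could be exercised from a session; the path and the nested `name` struct below are hypothetical, not taken from the PR:

    // Minimal sketch, assuming a Parquet dataset with a nested `name` struct
    // (path and schema are made up). With the flag enabled, only `name.first`
    // needs to be read; the sibling fields of the struct can be pruned by the
    // columnar reader.
    spark.conf.set("spark.sql.nestedSchemaPruning.enabled", "true")
    spark.read.parquet("/tmp/contacts").select("name.first").show()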
As far as I know, we haven't previously specified a config setting for all tests.
We have set a config for a whole test suite, but not for all tests.
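For example, a common pattern in the SQL test suites is to scope a config to a block with withSQLConf rather than setting it globally; a rough sketch (the table name and assertions are hypothetical):

    // Hedged sketch: SQLTestUtils' withSQLConf sets the conf only for the
    // enclosed block and restores the previous value afterwards, instead of
    // applying it to every test in the suite.
    withSQLConf("spark.sql.nestedSchemaPruning.enabled" -> "true") {
      val df = sql("SELECT name.first FROM contacts")
      // assertions against df go here
    }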
---