Github user gatorsmile commented on a diff in the pull request:
https://github.com/apache/spark/pull/22184#discussion_r212834477
--- Diff: docs/sql-programming-guide.md ---
@@ -1895,6 +1895,10 @@ working with timestamps in `pandas_udf`s to get the best performance, see
 - Since Spark 2.4, File listing for compute statistics is done in parallel by default. This can be disabled by setting `spark.sql.parallelFileListingInStatsComputation.enabled` to `False`.
 - Since Spark 2.4, Metadata files (e.g. Parquet summary files) and temporary files are not counted as data files when calculating table size during Statistics computation.
+## Upgrading From Spark SQL 2.3.1 to 2.3.2 and above
+
+  - In version 2.3.1 and earlier, when reading from a Parquet table, Spark always returns null for any column whose column names in the Hive metastore schema and the Parquet schema are in different letter cases, no matter whether `spark.sql.caseSensitive` is set to true or false. Since 2.3.2, when `spark.sql.caseSensitive` is set to false, Spark does case-insensitive column name resolution between the Hive metastore schema and the Parquet schema, so even if the column names are in different letter cases, Spark returns the corresponding column values. An exception is thrown if there is ambiguity, i.e. more than one Parquet column is matched.
--- End diff --
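
To make the migration note above concrete, here is a minimal sketch of the scenario it describes. The table name `t1`, the column names `ID`/`id`, and the path `/tmp/t1_data` are hypothetical, and the example assumes the default `spark.sql.hive.convertMetastoreParquet=true`, i.e. the native Parquet reader:

```scala
// Hypothetical reproduction of the case-mismatch scenario described above.
// The Parquet data has a physical column named "ID" (upper case), while the
// Hive metastore schema declares it as "id" (lower case).
import org.apache.spark.sql.SparkSession

val spark = SparkSession.builder()
  .appName("parquet-case-resolution-sketch")
  .enableHiveSupport()
  .getOrCreate()

// Write Parquet files whose physical column name is "ID".
spark.range(10).toDF("ID").write.mode("overwrite").parquet("/tmp/t1_data")

// Declare the table in the metastore with a lower-case column name.
spark.sql("CREATE TABLE t1 (id BIGINT) STORED AS PARQUET LOCATION '/tmp/t1_data'")

// In 2.3.1 and earlier this returns nulls, because "id" never matches "ID".
// Since 2.3.2, with spark.sql.caseSensitive=false, "id" resolves to "ID" and
// the real values are returned.
spark.conf.set("spark.sql.caseSensitive", "false")
spark.sql("SELECT id FROM t1").show()
```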
@cloud-fan We need to keep the behavior consistent no matter whether we
use the Hive serde reader or our native parquet reader. In the PR
https://github.com/apache/spark/pull/22148, we already introduced a change for
Hive tables when `spark.sql.hive.convertMetastoreParquet` is set to true, right?
For Spark native parquet tables that were created by us, this is a bug fix,
because the previous work does not respect `spark.sql.caseSensitive`; for
parquet tables created by Hive, the field resolution should be consistent no
matter whether it uses our reader or the Hive parquet reader. Most end users
do not know the difference between the Hive serde reader and the native
parquet reader.
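
As a rough sketch of the consistency point, reusing the hypothetical `t1` table from the earlier snippet, the values a query resolves should not depend on which reader is used:

```scala
// Sketch: with consistent field resolution, toggling the reader should not
// change the query result. Reuses the hypothetical table t1 defined above.

// Native Parquet reader (the default).
spark.conf.set("spark.sql.hive.convertMetastoreParquet", "true")
spark.catalog.refreshTable("t1")
val nativeIds = spark.sql("SELECT id FROM t1").collect().map(_.getLong(0)).sorted

// Hive serde reader.
spark.conf.set("spark.sql.hive.convertMetastoreParquet", "false")
spark.catalog.refreshTable("t1")
val hiveSerdeIds = spark.sql("SELECT id FROM t1").collect().map(_.getLong(0)).sorted

// Same values either way once case-insensitive resolution is applied consistently.
assert(nativeIds.sameElements(hiveSerdeIds))
```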
---
---------------------------------------------------------------------
To unsubscribe, e-mail: [email protected]
For additional commands, e-mail: [email protected]