Github user mallman commented on the issue:
https://github.com/apache/spark/pull/14690
I would be wary of amending our data sources to support case-insensitive
field resolution. For one thing, strictly speaking it can lead to ambiguity in
schema resolution. In theâpotential but unlikelyâevent that a
(case-sensitive) data source schema has two distinct fields `x1` and `x2` such
that `x1.toLowerCase == x2.toLowerCase` we're going to get undefined behavior.
For another, it adds implementation complexity for case-sensitive data
sources.
Finally, this would require us to read the schema files. That's something
I'm trying to avoid in this patch.
Personally, I'm more in favor of putting support for mixed-case
parquet-backed Hive metastore tables behind a "compatibility" flag. Setting
this flag to "true" would do on-disk/metastore schema reconciliation. Setting
this flag to "false" would omit that but support file schema with lowercase
field names only. Making that assumption would significantly enhance
performance.
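The two modes of the proposed flag might look roughly like this (a sketch only; the function and helper names are invented for illustration and are not Spark configuration or API):

```python
def reconcile(metastore_fields, file_fields):
    """Replace each on-disk field name with the metastore's (possibly
    mixed-case) spelling of the same name, matched case-insensitively."""
    by_lower = {f.lower(): f for f in metastore_fields}
    return [by_lower.get(f.lower(), f) for f in file_fields]

def effective_schema(metastore_fields, file_fields, compat_enabled):
    if compat_enabled:
        # "true": reconcile the metastore's mixed-case names against
        # the on-disk (Parquet) field names.
        return reconcile(metastore_fields, file_fields)
    # "false": assume on-disk field names are already lowercase and
    # use them as-is -- the fast path.
    return file_fields

# Compatibility mode recovers the metastore casing:
effective_schema(["userId", "eventTs"], ["userid", "eventts"], True)
```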
FWIW, as a proof of concept I extended this patch on our private clone to
perform schema reconciliation strictly from the pruned partitions for Hive
metastore tables. That could form the basis for a "compatibility" mode. With
this additional code, all of the failing unit tests passed except one. The
failing one is "SPARK-15248: explicitly added partitions should be readable"
from the `ParquetMetastoreSuite`. I didn't spend any time debugging that,
but it's a test I've had to deal with before in another PR.