Github user liancheng commented on the pull request:
https://github.com/apache/spark/pull/9226#issuecomment-150926923
Another issue I noticed while experimenting is that if field names in a Parquet
file contain uppercase letters, Hive doesn't recognize them either. First,
create a Parquet table from the Spark SQL side and save it as a Hive table:
```scala
// Create a single-partition DataFrame whose column name contains uppercase letters
val df = sqlContext.range(3).coalesce(1).selectExpr("id AS VaLuE")
// Persist it as a Parquet-backed Hive table
df.write.mode("overwrite").saveAsTable("parq")
```
Although `SHOW TABLES` and `DESC parq` show that Hive does recognize table
`parq`, `SELECT * FROM parq` returns only `NULL`s because the field name
contains uppercase letters. After changing `VaLuE` to `value`, everything works
fine.
Maybe we should lowercase all field names when persisting a DataFrame to the
metastore in Hive-compatible mode.
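For reference, a user-side workaround along those lines might look like the
sketch below (just an illustration, assuming the `toDF(colNames: _*)` rename
variant of the DataFrame API; not the proposed fix itself):
```scala
val df = sqlContext.range(3).coalesce(1).selectExpr("id AS VaLuE")
// Lowercase every column name before persisting, so the Hive-compatible
// metastore schema and the Parquet footer agree on casing.
val lowered = df.toDF(df.columns.map(_.toLowerCase): _*)
lowered.write.mode("overwrite").saveAsTable("parq")
```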