Github user dongjoon-hyun commented on the issue:
https://github.com/apache/spark/pull/22262
Sorry, but I still feel that this PR is losing focus. How about describing what you do in this PR like the following?
```
Apache Spark doesn't create a Hive table with duplicated fields in either case-sensitive or case-insensitive mode. However, if Spark creates ORC files in case-sensitive mode first and then creates a Hive table on that location, the table creation succeeds. In this situation, field resolution should fail in case-insensitive mode. Otherwise, we don't know which columns will be returned or filtered. Previously, SPARK-25132 fixed the same issue for Parquet.
Here is a simple example:
...
```
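The ambiguity described above can be sketched in plain Scala (this is a hypothetical illustration of the proposed behavior, not the actual Spark resolution code): when ORC files written in case-sensitive mode contain both `a` and `A`, a case-insensitive lookup matches more than one physical column, so resolution should fail rather than silently pick one.

```scala
// Hypothetical sketch of case-insensitive field resolution over ORC
// physical column names, failing on duplicate matches. Names and
// signatures here are illustrative, not Spark internals.
object FieldResolution {
  def resolve(physicalColumns: Seq[String],
              requested: String,
              caseSensitive: Boolean): Option[String] = {
    val matches =
      if (caseSensitive) physicalColumns.filter(_ == requested)
      else physicalColumns.filter(_.equalsIgnoreCase(requested))
    matches match {
      case Seq(unique) => Some(unique)      // exactly one match: resolve it
      case Seq()       => None              // no match: column absent
      case dups        =>                   // ambiguous: refuse to guess
        throw new RuntimeException(
          s"""Found duplicate field(s) "$requested": """ +
          s"${dups.mkString(", ")} in case-insensitive mode")
    }
  }
}
```

For example, `resolve(Seq("a", "A"), "A", caseSensitive = true)` returns `Some("A")`, while `resolve(Seq("a", "A"), "a", caseSensitive = false)` throws, which is the failure mode the PR argues for.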