Github user mallman commented on a diff in the pull request:
https://github.com/apache/spark/pull/15998#discussion_r89680470
--- Diff: sql/hive/src/main/scala/org/apache/spark/sql/hive/HiveExternalCatalog.scala ---
@@ -922,6 +923,29 @@ private[spark] class HiveExternalCatalog(conf: SparkConf, hadoopConf: Configurat
     /**
      * Returns the partition names from hive metastore for a given table in a database.
      */
+  override def listPartitionNames(
+      db: String,
+      table: String,
+      partialSpec: Option[TablePartitionSpec] = None): Seq[String] = withClient {
+    val actualPartColNames = getTable(db, table).partitionColumnNames
+    val clientPartitionNames =
+      client.getPartitionNames(db, table, partialSpec.map(lowerCasePartitionSpec))
+
+    if (actualPartColNames.exists(partColName => partColName != partColName.toLowerCase)) {
+      clientPartitionNames.map { partName =>
+        val partSpec = PartitioningUtils.parsePathFragmentAsSeq(partName)
--- End diff ---
I've run some tests to compare behavior between Hive and Spark in handling
gnarly partition column names, and I found some disparities. We've spent a
considerable amount of time wrangling with partition column name handling
recently, and I'm not sure what semantics we've decided on. To ensure the
behavior I'm seeing is what we're expecting, I want to describe a scenario I
ran.
In my test scenario, I created a table named `test` with the stock Hive
2.1.0 distribution. (I simply downloaded it from its download page and
initialized an empty Derby schema store.) The exact DDL I used to create this
table is as follows:
```create table test(a string) partitioned by (`P``Дr t` int);```
When I do a `describe test` with `hive` it shows the column name as ``p`дr t``. It appears to lowercase both the P and the cyrillic Д before storing the table schema in the metastore. I then run
```alter table test add partition(`P``Дr t`=0);```
When I run `show partitions test` in `hive` it gives me ``p`дr t=0``.
Additionally, when I list the contents of the `test` table's base directory in
HDFS, the partition directory entry is
```/user/hive/warehouse/test/p`дr t=0```
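For reference, the lowercasing Hive appears to apply to partition column names can be sketched in a few lines. (The `TablePartitionSpec` alias and the helper name mirror the ones in the diff above, but this is an illustrative stand-in, not the actual implementation.)

```scala
// Illustrative stand-in for Spark's partition-spec type: a map from
// partition column name to partition value.
type TablePartitionSpec = Map[String, String]

// Lowercase every partition column name, leaving values untouched --
// the way the metastore ends up storing "P`Дr t" as "p`дr t".
def lowerCasePartitionSpec(spec: TablePartitionSpec): TablePartitionSpec =
  spec.map { case (colName, value) => (colName.toLowerCase, value) }

// The column name from the scenario above comes back with its key
// lowercased to "p`дr t":
lowerCasePartitionSpec(Map("P`Дr t" -> "0"))
```

Note that `toLowerCase` handles the cyrillic Д as well as the latin P, which matches the `p`дr t` I see from `hive`.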
If I drop the table, create it with `spark-sql` using the same DDL as
before and do a `describe test`, the partition column is given as ``P`Дr t``.
Spark has preserved the case of the partition column name. If I then do
```alter table test add partition(`P``Дr t`=0);```
in `spark-sql` and `show partitions test` I get ``P`Дr t=0``. When I list the directory contents in HDFS, I get
```/user/hive/warehouse/test/P`Дr t=0```
The upshot is that Hive lowercases the partition column name while Spark leaves it unaltered. Is this correct?
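For what it's worth, the case-restoration step the diff performs can be sketched roughly as follows. (The helper name is mine, and the `k1=v1/k2=v2` parsing is a simplified stand-in for `PartitioningUtils.parsePathFragmentAsSeq`; this is a sketch of the idea, not Spark's actual code.)

```scala
// Restore the catalog's case-preserved partition column names in a
// partition name the metastore returned with lowercased keys.
def restorePartColNameCase(
    actualPartColNames: Seq[String],
    partName: String): String = {
  // Split "k1=v1/k2=v2" into (name, value) pairs. Simplified: assumes
  // no '/' or '=' inside names or values.
  val partSpec = partName.split("/").toSeq.map { kv =>
    val Array(name, value) = kv.split("=", 2)
    (name, value)
  }
  partSpec.map { case (lowerName, value) =>
    // Recover the actual-cased column name by case-insensitive match,
    // falling back to the name as given.
    val actualName =
      actualPartColNames.find(_.toLowerCase == lowerName).getOrElse(lowerName)
    s"$actualName=$value"
  }.mkString("/")
}

restorePartColNameCase(Seq("P`Дr t"), "p`дr t=0")  // "P`Дr t=0"
```

This is exactly the spot where the Hive/Spark disparity matters: if the metastore has lowercased the names but the catalog's schema has not, some mapping like this is needed to reconcile the two.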