sujith71955 commented on a change in pull request #24903: [SPARK-28084][SQL] 
Resolving the partition column name based on the resolver in sql load command 
URL: https://github.com/apache/spark/pull/24903#discussion_r304029971
 
 

 ##########
 File path: 
sql/hive/src/main/scala/org/apache/spark/sql/hive/HiveExternalCatalog.scala
 ##########
 @@ -879,7 +879,11 @@ private[spark] class HiveExternalCatalog(conf: SparkConf, 
hadoopConf: Configurat
       // columns. Here we Lowercase the column names before passing the 
partition spec to Hive
       // client, to satisfy Hive.
       // scalastyle:off caselocale
-      orderedPartitionSpec.put(colName.toLowerCase, partition(colName))
+      partition.keys.foreach { partColName =>
+        if (partColName.toLowerCase == colName.toLowerCase()) {
 
 Review comment:
   Very valid question. The problem here is that the partition spec string 
won't match the column name coming from the table metadata, so when we try to 
look up the partition value in the TablePartitionSpec using the table metadata 
column name, we get an empty result. That is why I am resolving the partition 
value based on the column name specified in the spec.
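A minimal sketch of the case-insensitive matching described above (hypothetical names, not the PR's exact code): a resolver compares a user-supplied spec key against the table-metadata partition column names and returns the canonical column name, mirroring how Spark's analyzer resolver handles case sensitivity.

```scala
// Hypothetical sketch of resolver-based partition column matching.
// Not the actual HiveExternalCatalog code; names are illustrative.
object PartitionColumnResolution {
  // Mirrors Spark's conf.resolver: case-insensitive comparison by default.
  type Resolver = (String, String) => Boolean
  val caseInsensitiveResolution: Resolver = (a, b) => a.equalsIgnoreCase(b)

  // Given the table's partition column names and a key from the user's
  // partition spec, return the canonical (metadata) column name, if any.
  def resolveSpecKey(tablePartCols: Seq[String], specKey: String,
                     resolver: Resolver): Option[String] =
    tablePartCols.find(col => resolver(col, specKey))

  def main(args: Array[String]): Unit = {
    val partCols = Seq("Year", "Month")
    println(resolveSpecKey(partCols, "year", caseInsensitiveResolution)) // Some(Year)
    println(resolveSpecKey(partCols, "day", caseInsensitiveResolution))  // None
  }
}
```

Resolving to the metadata column name first (rather than lowercasing both sides ad hoc) keeps the lookup consistent with how the analyzer resolves attributes elsewhere.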

----------------------------------------------------------------
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
[email protected]


With regards,
Apache Git Services

---------------------------------------------------------------------
To unsubscribe, e-mail: [email protected]
For additional commands, e-mail: [email protected]