vrozov commented on code in PR #52099:
URL: https://github.com/apache/spark/pull/52099#discussion_r2543186018


##########
sql/hive/src/test/scala/org/apache/spark/sql/hive/client/HiveVersionSuite.scala:
##########
@@ -35,9 +35,8 @@ private[client] abstract class HiveVersionSuite(version: String) extends SparkFu
     hadoopConf.set("datanucleus.autoStartMechanismMode", "ignored")
     hadoopConf.set("hive.metastore.schema.verification", "false")
     // Since Hive 3.0, HIVE-19310 skipped `ensureDbInit` if `hive.in.test=false`.
-    if (version == "3.0" || version == "3.1" || version == "4.0") {
+    if (version == "3.0" || version == "3.1" || version == "4.1") {
       hadoopConf.set("hive.in.test", "true")
-      hadoopConf.set("hive.query.reexecution.enabled", "false")

Review Comment:
   The `hive.query.reexecution.enabled` setting was moved to `HiveClientImpl.scala`, since it is required not only for the tests.



##########
sql/hive/src/main/scala/org/apache/spark/sql/hive/client/HiveClientImpl.scala:
##########
@@ -1224,8 +1204,9 @@ private[hive] object HiveClientImpl extends Logging {
       p: CatalogTablePartition,
       ht: HiveTable): HivePartition = {
     val tpart = new org.apache.hadoop.hive.metastore.api.Partition
+    val spec = new CaseInsensitiveStringMap(p.spec.asJava).asScala.view
     val partValues = ht.getPartCols.asScala.map { hc =>
-      p.spec.getOrElse(hc.getName, throw new IllegalArgumentException(
+      spec.getOrElse(hc.getName, throw new IllegalArgumentException(

Review Comment:
   While Hive has always been case-insensitive, it no longer preserves the case of the partition spec keys.
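   A minimal, dependency-free sketch of the behavior this change relies on. `CaseInsensitiveSpecLookup` and `lookup` are hypothetical names, standing in for the `CaseInsensitiveStringMap` wrapping of `p.spec` above:

   ```scala
   // Sketch: catalog partition spec keys may differ in case from the Hive
   // partition column names, so the lookup must ignore key case. Lowering
   // the keys once mimics what a case-insensitive map wrapper provides.
   object CaseInsensitiveSpecLookup {
     def lookup(spec: Map[String, String], partCol: String): String = {
       val lowerKeyed = spec.map { case (k, v) => (k.toLowerCase, v) }
       lowerKeyed.getOrElse(partCol.toLowerCase,
         throw new IllegalArgumentException(
           s"Partition spec is missing a value for column: $partCol"))
     }
   }
   ```

   With this, a spec keyed by `"DS"` still resolves for a partition column named `ds`, whereas a plain `Map.getOrElse` would throw.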



##########
sql/hive/src/main/scala/org/apache/spark/sql/hive/TableReader.scala:
##########
@@ -226,13 +227,10 @@ class HadoopTableReader(
           case (key, value) => props.setProperty(key, value)
         }
         DeserializerLock.synchronized {
-          deserializer.initialize(hconf, props)
+          deserializer.initialize(hconf, props, partProps)

Review Comment:
   The third parameter (the partition properties) is used only for partitioned tables.
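   A rough illustration of the intent (plain Scala; `SerDeProps` and `effectiveProps` are hypothetical names, not Spark or Hive API): partition-level SerDe properties exist only for partitioned tables, and where present they take precedence over the table-level ones.

   ```scala
   import java.util.Properties

   // Sketch: merge table-level and partition-level SerDe properties.
   // For unpartitioned tables partProps is null and only tableProps apply;
   // for partitioned tables the partition-level values override.
   object SerDeProps {
     def effectiveProps(tableProps: Properties, partProps: Properties): Properties = {
       val merged = new Properties()
       merged.putAll(tableProps)
       if (partProps != null) merged.putAll(partProps)
       merged
     }
   }
   ```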



##########
sql/hive/src/main/scala/org/apache/spark/sql/hive/execution/HiveScriptTransformationExec.scala:
##########
@@ -284,7 +288,7 @@ object HiveScriptIOSchema extends HiveInspectors {
 
     val properties = new Properties()
     properties.putAll(propsMap.asJava)
-    serde.initialize(null, properties)
+    serde.initialize(hadoopConf, properties, null)

Review Comment:
   @sarutak? Some serdes require the `configuration` argument, for example `AvroSerde`.
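   A toy example of the failure mode (hypothetical `ConfDependentSerDe` class, not Hive's actual `AbstractSerDe` API): a serde that reads settings from the configuration cannot tolerate a `null` there.

   ```scala
   import java.util.Properties

   // Hypothetical stand-in for a serde that, like AvroSerde, consults the
   // configuration at initialize time; a null configuration fails fast.
   class ConfDependentSerDe {
     def initialize(conf: Map[String, String],
                    tableProps: Properties,
                    partProps: Properties): Unit = {
       require(conf != null, "configuration must not be null")
       // e.g. an Avro-style serde might resolve a schema URL from the conf
       val schemaUrl = conf.getOrElse("avro.schema.url", "")
     }
   }
   ```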



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: [email protected]

For queries about this service, please contact Infrastructure at:
[email protected]

