MaxGekk commented on a change in pull request #31066:
URL: https://github.com/apache/spark/pull/31066#discussion_r556498503



##########
File path: sql/hive/src/test/scala/org/apache/spark/sql/hive/HiveSchemaInferenceSuite.scala
##########
@@ -118,18 +118,19 @@ class HiveSchemaInferenceSuite
         properties = Map.empty),
       true)
 
-    // Add partition records (if specified)
-    if (!partitionCols.isEmpty) {
-      spark.catalog.recoverPartitions(TEST_TABLE_NAME)
-    }
-
     // Check that the table returned by HiveExternalCatalog has schemaPreservesCase set to false
     // and that the raw table returned by the Hive client doesn't have any Spark SQL properties
     // set (table needs to be obtained from client since HiveExternalCatalog filters these
     // properties out).
     assert(!externalCatalog.getTable(DATABASE, TEST_TABLE_NAME).schemaPreservesCase)
     val rawTable = client.getTable(DATABASE, TEST_TABLE_NAME)
     assert(rawTable.properties.filterKeys(_.startsWith(DATASOURCE_SCHEMA_PREFIX)).isEmpty)
+
+    // Add partition records (if specified)
+    if (!partitionCols.isEmpty) {
+      spark.catalog.recoverPartitions(TEST_TABLE_NAME)
+    }

Review comment:
       Moving the code here for the same reason as at https://github.com/apache/spark/pull/31066/files#r556497006: `sparkSession.table(tableIdent)` overrides the catalog table fields according to the test's settings at https://github.com/apache/spark/blob/3fdbc48373cdf12b8ba05632bc65ad49b7af1afb/sql/core/src/main/scala/org/apache/spark/sql/internal/CatalogImpl.scala#L517
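The ordering concern above can be sketched with a plain-Scala mock (no Spark involved). Here `recoverPartitions`, `rawTableProps`, and the property key are hypothetical stand-ins, not Spark internals; the sketch only illustrates why the raw-table assertion must run before a call that writes Spark SQL properties back into the table metadata:

```scala
// Minimal sketch, assuming a mutable "raw table" whose properties get
// schema keys written back by a recoverPartitions-like side effect.
object AssertionOrderSketch {
  val DATASOURCE_SCHEMA_PREFIX = "spark.sql.sources.schema"

  // Hypothetical raw table metadata, initially free of Spark SQL properties.
  var rawTableProps: Map[String, String] = Map("external" -> "true")

  // Stand-in for an operation that, like partition recovery in the test,
  // re-resolves the table and persists schema properties as a side effect.
  def recoverPartitions(table: String): Unit = {
    rawTableProps += (s"$DATASOURCE_SCHEMA_PREFIX.numParts" -> "1")
  }

  def main(args: Array[String]): Unit = {
    // Asserting on the raw table BEFORE the side effect succeeds...
    assert(rawTableProps.keys.forall(!_.startsWith(DATASOURCE_SCHEMA_PREFIX)))

    recoverPartitions("test_table")

    // ...after it, the same "no Spark SQL properties" check would fail.
    assert(rawTableProps.keys.exists(_.startsWith(DATASOURCE_SCHEMA_PREFIX)))
    println("assertion order matters: checks passed only in this order")
  }
}
```

Running the two assertions in the opposite order would fail the first one, which mirrors why the diff moves `recoverPartitions` below the raw-table checks.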



