lirui-apache commented on a change in pull request #3790:
URL: https://github.com/apache/iceberg/pull/3790#discussion_r778791284



##########
File path: hive-metastore/src/test/java/org/apache/iceberg/hive/TestHiveMetastore.java
##########
@@ -144,13 +177,33 @@ public void stop() {
     if (executorService != null) {
       executorService.shutdown();
     }
-    if (hiveLocalDir != null) {
-      hiveLocalDir.delete();
-    }
     if (baseHandler != null) {
       baseHandler.shutdown();
     }
     METASTORE_THREADS_SHUTDOWN.invoke();
+    HMS_HANDLER_THREAD_LOCAL_CONF.remove();
+    HMS_HANDLER_THREAD_LOCAL_TXN.remove();

Review comment:
   I meant that Spark 2.4 relies on Hive 1.2.1. In the latest CI run, spark2 passed, so the failure isn't deterministic; it depends on how tests are scheduled and how JVMs are reused. We can force the problem to reproduce, though. E.g. I added this test case to `TestIdentityPartitionData` in Spark 2.4 to restart the Hive metastore mid-run:
   ```java
     @Test
     public void test() throws Exception {
       testFullProjection();
       stopMetastoreAndSpark();
       startMetastoreAndSpark();
       testFullProjection();
     }
   ```
   And run the test with `./gradlew -DsparkVersions=2.4 :iceberg-spark:iceberg-spark2:test --tests org.apache.iceberg.spark.source.TestIdentityPartitionData`. The test would fail with:
   
   ```
   org.apache.iceberg.spark.source.TestIdentityPartitionData > test[format = parquet, vectorized = false] FAILED
       java.lang.RuntimeException: Cannot start TestHiveMetastore
           at org.apache.iceberg.hive.TestHiveMetastore.start(TestHiveMetastore.java:134)
           at org.apache.iceberg.hive.TestHiveMetastore.start(TestHiveMetastore.java:97)
           at org.apache.iceberg.spark.SparkTestBase.startMetastoreAndSpark(SparkTestBase.java:57)
           at org.apache.iceberg.spark.source.TestIdentityPartitionData.test(TestIdentityPartitionData.java:143)

           Caused by:
           javax.jdo.JDOException: Exception thrown when executing query
           NestedThrowables:
           java.sql.SQLException: Container 2,177 not found.
               at org.datanucleus.api.jdo.NucleusJDOHelper.getJDOExceptionForNucleusException(NucleusJDOHelper.java:596)
               at org.datanucleus.api.jdo.JDOQuery.execute(JDOQuery.java:252)
               at org.apache.hadoop.hive.metastore.ObjectStore.getMRole(ObjectStore.java:3531)
               at org.apache.hadoop.hive.metastore.ObjectStore.addRole(ObjectStore.java:3221)
               at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
               at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
               at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
               at java.lang.reflect.Method.invoke(Method.java:498)
               at org.apache.hadoop.hive.metastore.RawStoreProxy.invoke(RawStoreProxy.java:114)
               at com.sun.proxy.$Proxy16.addRole(Unknown Source)
               at org.apache.hadoop.hive.metastore.HiveMetaStore$HMSHandler.createDefaultRoles_core(HiveMetaStore.java:656)
               at org.apache.hadoop.hive.metastore.HiveMetaStore$HMSHandler.createDefaultRoles(HiveMetaStore.java:648)
               at org.apache.hadoop.hive.metastore.HiveMetaStore$HMSHandler.init(HiveMetaStore.java:462)
               at org.apache.hadoop.hive.metastore.HiveMetaStore$HMSHandler.<init>(HiveMetaStore.java:419)
               at org.apache.hadoop.hive.metastore.HiveMetaStore$HMSHandler.<init>(HiveMetaStore.java:412)
   ```
   And the stack trace matches the Hive 1.2.1 source:
   
https://github.com/apache/hive/blob/release-1.2.1/metastore/src/java/org/apache/hadoop/hive/metastore/HiveMetaStore.java#L462
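   
   As an aside, my reading is that the `HMS_HANDLER_THREAD_LOCAL_CONF.remove()` / `HMS_HANDLER_THREAD_LOCAL_TXN.remove()` calls in the diff clear `HMSHandler`'s per-thread state so a restarted metastore does not keep using handles tied to the previous embedded Derby instance. Below is a minimal sketch of how such thread-locals could be looked up reflectively; the field names `threadLocalConf` and `threadLocalTxn` are assumptions and may differ between Hive versions, so treat this as illustration rather than as exactly what this patch does:
   ```java
   // Illustrative sketch only (not necessarily how this patch wires it up):
   // look up HMSHandler's static thread-locals via reflection so stop() can
   // clear them. The field names passed in are assumptions and may not exist
   // under every Hive version.
   import java.lang.reflect.Field;

   import org.apache.hadoop.hive.metastore.HiveMetaStore;

   class HmsHandlerThreadLocals {
     static ThreadLocal<?> lookup(String fieldName) {
       try {
         Field field = HiveMetaStore.HMSHandler.class.getDeclaredField(fieldName);
         field.setAccessible(true);
         // static field, so no HMSHandler instance is needed
         return (ThreadLocal<?>) field.get(null);
       } catch (ReflectiveOperationException e) {
         throw new RuntimeException("Cannot access HMSHandler." + fieldName, e);
       }
     }
   }

   // e.g. from TestHiveMetastore.stop():
   //   HmsHandlerThreadLocals.lookup("threadLocalConf").remove();
   //   HmsHandlerThreadLocals.lookup("threadLocalTxn").remove();
   ```
   If that reading is right, clearing these in `stop()` is what keeps the next `start()` from hitting the stale state that produces the `Container 2,177 not found` Derby error above.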




-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: [email protected]

For queries about this service, please contact Infrastructure at:
[email protected]


