[
https://issues.apache.org/jira/browse/HIVE-25144?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17359364#comment-17359364
]
David Mollitor commented on HIVE-25144:
---------------------------------------
And here is the logging...
{code:none}
2021-06-04 12:01:25,927 INFO org.apache.hadoop.hive.metastore.HiveMetaStore.audit: [pool-9-thread-3]: ugi=kudu/host@DOMAIN ip=xx.xx.xx.xx cmd=create_table: Table(tableName:test_table, dbName:test_database, owner:user, createTime:0, lastAccessTime:0, retention:0, sd:StorageDescriptor(cols:... tableType:MANAGED_TABLE, temporary:false, ownerType:USER)
2021-06-04 12:01:26,001 INFO org.apache.hadoop.hive.common.FileUtils: [pool-9-thread-3]: Creating directory if it doesn't exist: hdfs://ns1/user/hive/warehouse/test_database.db/test_table
2021-06-04 12:01:26,185 ERROR com.jolbox.bonecp.ConnectionHandle: [pool-9-thread-3]: Database access problem. Killing off this connection and all remaining connections in the connection pool. SQL State = 08S01
2021-06-04 12:01:26,294 INFO org.apache.hadoop.fs.TrashPolicyDefault: [pool-9-thread-3]: Moved: 'hdfs://ns1/user/hive/warehouse/test_database.db/test_table' to trash at: hdfs://ns1/user/.Trash/kudu/Current/user/hive/warehouse/test_database.db/test_table
2021-06-04 12:01:26,304 ERROR org.apache.hadoop.hive.metastore.RetryingHMSHandler: [pool-9-thread-3]: Retrying HMSHandler after 2000 ms (attempt 1 of 10) with error: javax.jdo.JDODataStoreException: Communications link failure
The last packet successfully received from the server was 1,521,446 milliseconds ago. The last packet sent successfully to the server was 1,521,447 milliseconds ago.
        at org.datanucleus.api.jdo.NucleusJDOHelper.getJDOExceptionForNucleusException(NucleusJDOHelper.java:543)
        at org.datanucleus.api.jdo.JDOTransaction.commit(JDOTransaction.java:171)
        at org.apache.hadoop.hive.metastore.ObjectStore.commitTransaction(ObjectStore.java:727)
        at sun.reflect.GeneratedMethodAccessor34.invoke(Unknown Source)
        at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
        at java.lang.reflect.Method.invoke(Method.java:498)
        at org.apache.hadoop.hive.metastore.RawStoreProxy.invoke(RawStoreProxy.java:101)
        at com.sun.proxy.$Proxy26.commitTransaction(Unknown Source)
        at org.apache.hadoop.hive.metastore.HiveMetaStore$HMSHandler.create_table_core(HiveMetaStore.java:1582)
        at org.apache.hadoop.hive.metastore.HiveMetaStore$HMSHandler.create_table_with_environment_context(HiveMetaStore.java:1615)
        at sun.reflect.GeneratedMethodAccessor79.invoke(Unknown Source)
        at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
        at java.lang.reflect.Method.invoke(Method.java:498)
        at org.apache.hadoop.hive.metastore.RetryingHMSHandler.invokeInternal(RetryingHMSHandler.java:140)
        at org.apache.hadoop.hive.metastore.RetryingHMSHandler.invoke(RetryingHMSHandler.java:99)
        at com.sun.proxy.$Proxy28.create_table_with_environment_context(Unknown Source)
        at org.apache.hadoop.hive.metastore.api.ThriftHiveMetastore$Processor$create_table_with_environment_context.getResult(ThriftHiveMetastore.java:10993)
        at org.apache.hadoop.hive.metastore.api.ThriftHiveMetastore$Processor$create_table_with_environment_context.getResult(ThriftHiveMetastore.java:10977)
        at org.apache.thrift.ProcessFunction.process(ProcessFunction.java:39)
        at org.apache.thrift.TBaseProcessor.process(TBaseProcessor.java:39)
        at org.apache.hadoop.hive.thrift.HadoopThriftAuthBridge$Server$TUGIAssumingProcessor$1.run(HadoopThriftAuthBridge.java:594)
        at org.apache.hadoop.hive.thrift.HadoopThriftAuthBridge$Server$TUGIAssumingProcessor$1.run(HadoopThriftAuthBridge.java:589)
        at java.security.AccessController.doPrivileged(Native Method)
        at javax.security.auth.Subject.doAs(Subject.java:422)
        at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1875)
        at org.apache.hadoop.hive.thrift.HadoopThriftAuthBridge$Server$TUGIAssumingProcessor.process(HadoopThriftAuthBridge.java:589)
        at org.apache.thrift.server.TThreadPoolServer$WorkerProcess.run(TThreadPoolServer.java:286)
        at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
        at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
        at java.lang.Thread.run(Thread.java:748)
NestedThrowablesStackTrace:
com.mysql.jdbc.exceptions.jdbc4.CommunicationsException: Communications link failure
The last packet successfully received from the server was 1,521,446 milliseconds ago. The last packet sent successfully to the server was 1,521,447 milliseconds ago.
        at sun.reflect.GeneratedConstructorAccessor84.newInstance(Unknown Source)
        at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
        at java.lang.reflect.Constructor.newInstance(Constructor.java:423)
        at com.mysql.jdbc.Util.handleNewInstance(Util.java:411)
        at com.mysql.jdbc.SQLError.createCommunicationsException(SQLError.java:1121)
        at com.mysql.jdbc.MysqlIO.send(MysqlIO.java:3938)
        at com.mysql.jdbc.MysqlIO.sendCommand(MysqlIO.java:2551)
        at com.mysql.jdbc.MysqlIO.sqlQueryDirect(MysqlIO.java:2731)
        at com.mysql.jdbc.ConnectionImpl.execSQL(ConnectionImpl.java:2814)
        at com.mysql.jdbc.ConnectionImpl.setAutoCommit(ConnectionImpl.java:5338)
        at com.jolbox.bonecp.ConnectionHandle.setAutoCommit(ConnectionHandle.java:1292)
        at org.datanucleus.store.rdbms.ConnectionFactoryImpl$ManagedConnectionImpl.getConnection(ConnectionFactoryImpl.java:423)
        at org.datanucleus.store.rdbms.SQLController.getStatementForQuery(SQLController.java:311)
        at org.datanucleus.store.rdbms.SQLController.getStatementForQuery(SQLController.java:295)
        at org.datanucleus.store.rdbms.scostore.JoinListStore.listIterator(JoinListStore.java:761)
        at org.datanucleus.store.rdbms.scostore.AbstractListStore.listIterator(AbstractListStore.java:93)
        at org.datanucleus.store.rdbms.scostore.AbstractListStore.iterator(AbstractListStore.java:83)
        at org.datanucleus.store.types.wrappers.backed.List.loadFromStore(List.java:264)
        at org.datanucleus.store.types.wrappers.backed.List.getValue(List.java:225)
        at org.datanucleus.store.types.wrappers.backed.List.getValue(List.java:70)
        at org.datanucleus.store.types.SCOUtils.unwrapSCOField(SCOUtils.java:79)
        at org.datanucleus.store.fieldmanager.DetachFieldManager.internalFetchObjectField(DetachFieldManager.java:236)
        at org.datanucleus.store.fieldmanager.AbstractFetchDepthFieldManager.fetchObjectField(AbstractFetchDepthFieldManager.java:114)
        at org.datanucleus.state.StateManagerImpl.detach(StateManagerImpl.java:3571)
        at org.datanucleus.ExecutionContextImpl.detachObject(ExecutionContextImpl.java:2693)
        at org.datanucleus.ExecutionContextThreadedImpl.detachObject(ExecutionContextThreadedImpl.java:329)
        at org.datanucleus.store.fieldmanager.DetachFieldManager.processPersistable(DetachFieldManager.java:92)
        at org.datanucleus.store.fieldmanager.DetachFieldManager.internalFetchObjectField(DetachFieldManager.java:125)
        at org.datanucleus.store.fieldmanager.AbstractFetchDepthFieldManager.fetchObjectField(AbstractFetchDepthFieldManager.java:105)
        at org.datanucleus.state.StateManagerImpl.detach(StateManagerImpl.java:3571)
        at org.datanucleus.ExecutionContextImpl.performDetachAllOnTxnEnd(ExecutionContextImpl.java:4579)
        at org.datanucleus.ExecutionContextImpl.postCommit(ExecutionContextImpl.java:4616)
        at org.datanucleus.ExecutionContextImpl.transactionCommitted(ExecutionContextImpl.java:775)
        at org.datanucleus.TransactionImpl.internalPostCommit(TransactionImpl.java:559)
        at org.datanucleus.TransactionImpl.commit(TransactionImpl.java:342)
        at org.datanucleus.api.jdo.JDOTransaction.commit(JDOTransaction.java:107)
        at org.apache.hadoop.hive.metastore.ObjectStore.commitTransaction(ObjectStore.java:727)
        at sun.reflect.GeneratedMethodAccessor34.invoke(Unknown Source)
        at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
        at java.lang.reflect.Method.invoke(Method.java:498)
        at org.apache.hadoop.hive.metastore.RawStoreProxy.invoke(RawStoreProxy.java:101)
        at com.sun.proxy.$Proxy26.commitTransaction(Unknown Source)
        at org.apache.hadoop.hive.metastore.HiveMetaStore$HMSHandler.create_table_core(HiveMetaStore.java:1582)
        at org.apache.hadoop.hive.metastore.HiveMetaStore$HMSHandler.create_table_with_environment_context(HiveMetaStore.java:1615)
        at sun.reflect.GeneratedMethodAccessor79.invoke(Unknown Source)
        at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
        at java.lang.reflect.Method.invoke(Method.java:498)
        at org.apache.hadoop.hive.metastore.RetryingHMSHandler.invokeInternal(RetryingHMSHandler.java:140)
        at org.apache.hadoop.hive.metastore.RetryingHMSHandler.invoke(RetryingHMSHandler.java:99)
        at com.sun.proxy.$Proxy28.create_table_with_environment_context(Unknown Source)
        at org.apache.hadoop.hive.metastore.api.ThriftHiveMetastore$Processor$create_table_with_environment_context.getResult(ThriftHiveMetastore.java:10993)
        at org.apache.hadoop.hive.metastore.api.ThriftHiveMetastore$Processor$create_table_with_environment_context.getResult(ThriftHiveMetastore.java:10977)
        at org.apache.thrift.ProcessFunction.process(ProcessFunction.java:39)
        at org.apache.thrift.TBaseProcessor.process(TBaseProcessor.java:39)
        at org.apache.hadoop.hive.thrift.HadoopThriftAuthBridge$Server$TUGIAssumingProcessor$1.run(HadoopThriftAuthBridge.java:594)
        at org.apache.hadoop.hive.thrift.HadoopThriftAuthBridge$Server$TUGIAssumingProcessor$1.run(HadoopThriftAuthBridge.java:589)
        at java.security.AccessController.doPrivileged(Native Method)
        at javax.security.auth.Subject.doAs(Subject.java:422)
        at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1875)
        at org.apache.hadoop.hive.thrift.HadoopThriftAuthBridge$Server$TUGIAssumingProcessor.process(HadoopThriftAuthBridge.java:589)
        at org.apache.thrift.server.TThreadPoolServer$WorkerProcess.run(TThreadPoolServer.java:286)
        at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
        at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
        at java.lang.Thread.run(Thread.java:748)
Caused by: java.net.SocketException: Connection reset
        at java.net.SocketOutputStream.socketWrite(SocketOutputStream.java:115)
        at java.net.SocketOutputStream.write(SocketOutputStream.java:155)
        at java.io.BufferedOutputStream.flushBuffer(BufferedOutputStream.java:82)
        at java.io.BufferedOutputStream.flush(BufferedOutputStream.java:140)
        at com.mysql.jdbc.MysqlIO.send(MysqlIO.java:3919)
        ... 58 more
2021-06-04 12:01:28,329 INFO org.apache.hadoop.hive.metastore.HiveMetaStore: [pool-9-thread-3]: 3: create_table: Table(tableName:test_table, dbName:test_database, owner:user, createTime:0, lastAccessTime:0, retention:0, sd:StorageDescriptor(cols)], location:hdfs://ns1/user/hive/warehouse/test_database.db/test_table, inputFormat:, outputFormat:, compressed:false, numBuckets:0, serdeInfo:SerDeInfo(name:, serializationLib:, parameters:{}), bucketCols:[], sortCols:[], parameters:{}), partitionKeys:[], parameters:{kudu.table_id=xxxxxxxxxxxx, transient_lastDdlTime=0, kudu.master_addresses=xxxxxxxx, storage_handler=org.apache.hadoop.hive.kudu.KuduStorageHandler, kudu.table_name=test_database.test_table}, viewOriginalText:, viewExpandedText:, tableType:MANAGED_TABLE, temporary:false, ownerType:USER)
2021-06-04 12:01:28,330 INFO org.apache.hadoop.hive.metastore.HiveMetaStore.audit: [pool-9-thread-3]: ugi=kudu/host@DOMAIN ip=xx.xx.xx.xx cmd=create_table: Table(tableName:test_table, dbName:test_database, owner:user, ... tableType:MANAGED_TABLE, temporary:false, ownerType:USER)
2021-06-04 12:01:28,356 ERROR org.apache.hadoop.hive.metastore.RetryingHMSHandler: [pool-9-thread-3]: AlreadyExistsException(message:Table test_table already exists)
        at org.apache.hadoop.hive.metastore.HiveMetaStore$HMSHandler.create_table_core(HiveMetaStore.java:1528)
        at org.apache.hadoop.hive.metastore.HiveMetaStore$HMSHandler.create_table_with_environment_context(HiveMetaStore.java:1615)
        at sun.reflect.GeneratedMethodAccessor79.invoke(Unknown Source)
        at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
        at java.lang.reflect.Method.invoke(Method.java:498)
        at org.apache.hadoop.hive.metastore.RetryingHMSHandler.invokeInternal(RetryingHMSHandler.java:140)
        at org.apache.hadoop.hive.metastore.RetryingHMSHandler.invoke(RetryingHMSHandler.java:99)
        at com.sun.proxy.$Proxy28.create_table_with_environment_context(Unknown Source)
        at org.apache.hadoop.hive.metastore.api.ThriftHiveMetastore$Processor$create_table_with_environment_context.getResult(ThriftHiveMetastore.java:10993)
        at org.apache.hadoop.hive.metastore.api.ThriftHiveMetastore$Processor$create_table_with_environment_context.getResult(ThriftHiveMetastore.java:10977)
        at org.apache.thrift.ProcessFunction.process(ProcessFunction.java:39)
        at org.apache.thrift.TBaseProcessor.process(TBaseProcessor.java:39)
        at org.apache.hadoop.hive.thrift.HadoopThriftAuthBridge$Server$TUGIAssumingProcessor$1.run(HadoopThriftAuthBridge.java:594)
        at org.apache.hadoop.hive.thrift.HadoopThriftAuthBridge$Server$TUGIAssumingProcessor$1.run(HadoopThriftAuthBridge.java:589)
        at java.security.AccessController.doPrivileged(Native Method)
        at javax.security.auth.Subject.doAs(Subject.java:422)
        at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1875)
        at org.apache.hadoop.hive.thrift.HadoopThriftAuthBridge$Server$TUGIAssumingProcessor.process(HadoopThriftAuthBridge.java:589)
        at org.apache.thrift.server.TThreadPoolServer$WorkerProcess.run(TThreadPoolServer.java:286)
        at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
        at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
        at java.lang.Thread.run(Thread.java:748)
{code}
> Add NoReconnect Annotation to CreateXXX Methods With AlreadyExistsException
> ---------------------------------------------------------------------------
>
> Key: HIVE-25144
> URL: https://issues.apache.org/jira/browse/HIVE-25144
> Project: Hive
> Issue Type: Improvement
> Reporter: David Mollitor
> Assignee: David Mollitor
> Priority: Major
> Labels: pull-request-available
> Time Spent: 0.5h
> Remaining Estimate: 0h
>
> I have recently seen an issue where a Hive {{CREATE TABLE}} statement fails
> with {{AlreadyExistsException}} even though the table absolutely does not
> exist.
>
> I believe the issue is that there is a timeout/transient error between HMS and
> the backend database. So, the client submits the request to HMS, and the
> request does eventually succeed, but only after the connection back to the
> client has been lost. Therefore, when the HMS Client "retry" functionality
> kicks in, the second time around the table looks like it already exists.
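> A rough sketch of the retry pattern being described, purely for illustration:
> the proxy class below is a stand-in and is not the actual
> RetryingMetaStoreClient code, and the {{NoReconnect}} marker it declares is a
> local stand-in for Hive's annotation.
> {code:java}
> import java.lang.annotation.ElementType;
> import java.lang.annotation.Retention;
> import java.lang.annotation.RetentionPolicy;
> import java.lang.annotation.Target;
> import java.lang.reflect.InvocationHandler;
> import java.lang.reflect.Method;
>
> // Illustrative only: a simplified retrying proxy in the spirit of the HMS
> // client retry logic, showing why blindly retrying a create call is risky.
> public class RetrySketch implements InvocationHandler {
>
>   // Stand-in marker for methods that must not be retried after a reconnect,
>   // because the first attempt may already have taken effect server-side.
>   @Retention(RetentionPolicy.RUNTIME)
>   @Target(ElementType.METHOD)
>   public @interface NoReconnect {}
>
>   private final Object delegate;
>   private final int retryLimit = 2;
>
>   public RetrySketch(Object delegate) {
>     this.delegate = delegate;
>   }
>
>   @Override
>   public Object invoke(Object proxy, Method method, Object[] args) throws Throwable {
>     Throwable last = null;
>     for (int attempt = 0; attempt <= retryLimit; attempt++) {
>       try {
>         return method.invoke(delegate, args);      // e.g. create_table
>       } catch (Exception e) {
>         last = (e.getCause() != null) ? e.getCause() : e;
>         if (method.isAnnotationPresent(NoReconnect.class)) {
>           throw last;   // server-side state is unknown: fail fast
>         }
>         // Otherwise reconnect and retry. If the first attempt actually
>         // committed on the server, a retried create now surfaces as
>         // AlreadyExistsException instead of the original transient failure.
>       }
>     }
>     throw last;
>   }
> }
> {code}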
>
> If something goes wrong during an HMS CREATE operation, we do not know the
> state of the operation, and therefore it should just fail.
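> For illustration, the direction suggested by the title might look roughly like
> the fragment below. The interface and signatures are paraphrased, not copied
> from IMetaStoreClient, and the annotation is again a local stand-in for the
> NoReconnect marker.
> {code:java}
> // Illustrative only: a paraphrased client-interface fragment showing
> // create-style calls marked so the retrying proxy fails fast instead of
> // reconnecting and silently retrying them.
> public interface MetaStoreClientSketch {
>
>   // Local stand-in for the NoReconnect marker annotation.
>   @interface NoReconnect {}
>
>   @NoReconnect
>   void createTable(Object table) throws Exception;
>
>   @NoReconnect
>   void createDatabase(Object database) throws Exception;
>
>   // Idempotent reads can still be retried freely after a reconnect.
>   boolean tableExists(String dbName, String tableName) throws Exception;
> }
> {code}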
>
> That would certainly be more transparent to the end user about what is going
> on; an {{AlreadyExistsException}} is confusing.
--
This message was sent by Atlassian Jira
(v8.3.4#803005)