[ https://issues.apache.org/jira/browse/HIVE-7710?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Chun Chen updated HIVE-7710:
----------------------------

    Status: Patch Available  (was: Open)

> Rename table across database might fail
> ---------------------------------------
>
>                 Key: HIVE-7710
>                 URL: https://issues.apache.org/jira/browse/HIVE-7710
>             Project: Hive
>          Issue Type: Bug
>            Reporter: Chun Chen
>            Assignee: Chun Chen
>         Attachments: HIVE-7710.patch
>
>
> If a table d1.t2 already exists, the following rename statement fails: the generated SQL updates TBL_NAME on its own ("UPDATE TBLS SET TBL_NAME=? WHERE TBL_ID=?"), so the row for d1.t1 transiently collides with d1.t2 under the 'UNIQUETABLE' unique index on TBLS before DB_ID is changed.
> {code}
> alter table d1.t1 rename to d2.t2;
> // Exception
> 2014-08-13 03:32:40,512 ERROR Datastore.Persist (Log4JLogger.java:error(115)) - Update of object "org.apache.hadoop.hive.metastore.model.MTable@729c5167" using statement "UPDATE TBLS SET TBL_NAME=? WHERE TBL_ID=?" failed : java.sql.SQLIntegrityConstraintViolationException: The statement was aborted because it would have caused a duplicate key value in a unique or primary key constraint or unique index identified by 'UNIQUETABLE' defined on 'TBLS'.
>       at org.apache.derby.impl.jdbc.SQLExceptionFactory40.getSQLException(Unknown Source)
>       at org.apache.derby.impl.jdbc.Util.generateCsSQLException(Unknown Source)
>       at org.apache.derby.impl.jdbc.TransactionResourceImpl.wrapInSQLException(Unknown Source)
>       at org.apache.derby.impl.jdbc.TransactionResourceImpl.handleException(Unknown Source)
>       at org.apache.derby.impl.jdbc.EmbedConnection.handleException(Unknown Source)
>       at org.apache.derby.impl.jdbc.ConnectionChild.handleException(Unknown Source)
>       at org.apache.derby.impl.jdbc.EmbedStatement.executeStatement(Unknown Source)
>       at org.apache.derby.impl.jdbc.EmbedPreparedStatement.executeStatement(Unknown Source)
>       at org.apache.derby.impl.jdbc.EmbedPreparedStatement.executeLargeUpdate(Unknown Source)
>       at org.apache.derby.impl.jdbc.EmbedPreparedStatement.executeUpdate(Unknown Source)
>       at com.jolbox.bonecp.PreparedStatementHandle.executeUpdate(PreparedStatementHandle.java:205)
>       at org.datanucleus.store.rdbms.ParamLoggingPreparedStatement.executeUpdate(ParamLoggingPreparedStatement.java:399)
>       at org.datanucleus.store.rdbms.SQLController.executeStatementUpdate(SQLController.java:439)
>       at org.datanucleus.store.rdbms.request.UpdateRequest.execute(UpdateRequest.java:374)
>       at org.datanucleus.store.rdbms.RDBMSPersistenceHandler.updateTable(RDBMSPersistenceHandler.java:417)
>       at org.datanucleus.store.rdbms.RDBMSPersistenceHandler.updateObject(RDBMSPersistenceHandler.java:390)
>       at org.datanucleus.state.JDOStateManager.flush(JDOStateManager.java:5027)
>       at org.datanucleus.flush.FlushOrdered.execute(FlushOrdered.java:106)
>       at org.datanucleus.ExecutionContextImpl.flushInternal(ExecutionContextImpl.java:4119)
>       at org.datanucleus.ExecutionContextThreadedImpl.flushInternal(ExecutionContextThreadedImpl.java:450)
>       at org.datanucleus.ExecutionContextImpl.markDirty(ExecutionContextImpl.java:3879)
>       at org.datanucleus.ExecutionContextThreadedImpl.markDirty(ExecutionContextThreadedImpl.java:422)
>       at org.datanucleus.state.JDOStateManager.postWriteField(JDOStateManager.java:4815)
>       at org.datanucleus.state.JDOStateManager.replaceField(JDOStateManager.java:3356)
>       at org.datanucleus.state.JDOStateManager.updateField(JDOStateManager.java:2018)
>       at org.datanucleus.state.JDOStateManager.setStringField(JDOStateManager.java:1791)
>       at org.apache.hadoop.hive.metastore.model.MStorageDescriptor.jdoSetlocation(MStorageDescriptor.java)
>       at org.apache.hadoop.hive.metastore.model.MStorageDescriptor.setLocation(MStorageDescriptor.java:88)
>       at org.apache.hadoop.hive.metastore.ObjectStore.copyMSD(ObjectStore.java:2699)
>       at org.apache.hadoop.hive.metastore.ObjectStore.alterTable(ObjectStore.java:2572)
>       at sun.reflect.GeneratedMethodAccessor20.invoke(Unknown Source)
>       at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
>       at java.lang.reflect.Method.invoke(Method.java:597)
>       at org.apache.hadoop.hive.metastore.RawStoreProxy.invoke(RawStoreProxy.java:98)
>       at com.sun.proxy.$Proxy6.alterTable(Unknown Source)
>       at org.apache.hadoop.hive.metastore.HiveAlterHandler.alterTable(HiveAlterHandler.java:205)
>       at org.apache.hadoop.hive.metastore.HiveMetaStore$HMSHandler.alter_table_with_environment_context(HiveMetaStore.java:2771)
>       at sun.reflect.GeneratedMethodAccessor19.invoke(Unknown Source)
>       at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
>       at java.lang.reflect.Method.invoke(Method.java:597)
>       at org.apache.hadoop.hive.metastore.RetryingHMSHandler.invoke(RetryingHMSHandler.java:105)
>       at com.sun.proxy.$Proxy8.alter_table_with_environment_context(Unknown Source)
>       at org.apache.hadoop.hive.metastore.HiveMetaStoreClient.alter_table(HiveMetaStoreClient.java:293)
>       at org.apache.hadoop.hive.ql.metadata.SessionHiveMetaStoreClient.alter_table(SessionHiveMetaStoreClient.java:201)
>       at org.apache.hadoop.hive.metastore.HiveMetaStoreClient.alter_table(HiveMetaStoreClient.java:288)
>       at sun.reflect.GeneratedMethodAccessor18.invoke(Unknown Source)
>       at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
>       at java.lang.reflect.Method.invoke(Method.java:597)
>       at org.apache.hadoop.hive.metastore.RetryingMetaStoreClient.invoke(RetryingMetaStoreClient.java:89)
>       at com.sun.proxy.$Proxy9.alter_table(Unknown Source)
>       at org.apache.hadoop.hive.ql.metadata.Hive.alterTable(Hive.java:404)
>       at org.apache.hadoop.hive.ql.exec.DDLTask.alterTable(DDLTask.java:3542)
>       at org.apache.hadoop.hive.ql.exec.DDLTask.execute(DDLTask.java:318)
>       at org.apache.hadoop.hive.ql.exec.Task.executeTask(Task.java:161)
>       at org.apache.hadoop.hive.ql.exec.TaskRunner.runSequential(TaskRunner.java:85)
>       at org.apache.hadoop.hive.ql.Driver.launchTask(Driver.java:1538)
>       at org.apache.hadoop.hive.ql.Driver.execute(Driver.java:1305)
>       at org.apache.hadoop.hive.ql.Driver.runInternal(Driver.java:1118)
>       at org.apache.hadoop.hive.ql.Driver.run(Driver.java:942)
>       at org.apache.hadoop.hive.ql.Driver.run(Driver.java:932)
>       at org.apache.hadoop.hive.cli.CliDriver.processLocalCmd(CliDriver.java:246)
>       at org.apache.hadoop.hive.cli.CliDriver.processCmd(CliDriver.java:198)
>       at org.apache.hadoop.hive.cli.CliDriver.processLine(CliDriver.java:408)
>       at org.apache.hadoop.hive.cli.CliDriver.processLine(CliDriver.java:344)
>       at org.apache.hadoop.hive.ql.QTestUtil.executeClient(QTestUtil.java:833)
>       at org.apache.hadoop.hive.cli.TestCliDriver.runTest(TestCliDriver.java:136)
>       at org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_alter_rename_table(TestCliDriver.java:120)
>       at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>       at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
>       at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
>       at java.lang.reflect.Method.invoke(Method.java:597)
>       at junit.framework.TestCase.runTest(TestCase.java:168)
>       at junit.framework.TestCase.runBare(TestCase.java:134)
>       at junit.framework.TestResult$1.protect(TestResult.java:110)
>       at junit.framework.TestResult.runProtected(TestResult.java:128)
>       at junit.framework.TestResult.run(TestResult.java:113)
>       at junit.framework.TestCase.run(TestCase.java:124)
>       at junit.framework.TestSuite.runTest(TestSuite.java:243)
>       at junit.framework.TestSuite.run(TestSuite.java:238)
>       at org.junit.internal.runners.JUnit38ClassRunner.run(JUnit38ClassRunner.java:83)
>       at org.apache.maven.surefire.junit4.JUnit4Provider.execute(JUnit4Provider.java:264)
>       at org.apache.maven.surefire.junit4.JUnit4Provider.executeTestSet(JUnit4Provider.java:153)
>       at org.apache.maven.surefire.junit4.JUnit4Provider.invoke(JUnit4Provider.java:124)
>       at org.apache.maven.surefire.booter.ForkedBooter.invokeProviderInSameClassLoader(ForkedBooter.java:200)
>       at org.apache.maven.surefire.booter.ForkedBooter.runSuitesInProcess(ForkedBooter.java:153)
>       at org.apache.maven.surefire.booter.ForkedBooter.main(ForkedBooter.java:103)
> Caused by: java.sql.SQLException: The statement was aborted because it would have caused a duplicate key value in a unique or primary key constraint or unique index identified by 'UNIQUETABLE' defined on 'TBLS'.
>       at org.apache.derby.impl.jdbc.SQLExceptionFactory.getSQLException(Unknown Source)
>       at org.apache.derby.impl.jdbc.SQLExceptionFactory40.wrapArgsForTransportAcrossDRDA(Unknown Source)
>       ... 86 more
> Caused by: ERROR 23505: The statement was aborted because it would have caused a duplicate key value in a unique or primary key constraint or unique index identified by 'UNIQUETABLE' defined on 'TBLS'.
>       at org.apache.derby.iapi.error.StandardException.newException(Unknown Source)
>       at org.apache.derby.impl.sql.execute.IndexChanger.insertAndCheckDups(Unknown Source)
>       at org.apache.derby.impl.sql.execute.IndexChanger.finish(Unknown Source)
>       at org.apache.derby.impl.sql.execute.IndexSetChanger.finish(Unknown Source)
>       at org.apache.derby.impl.sql.execute.RowChangerImpl.finish(Unknown Source)
>       at org.apache.derby.impl.sql.execute.UpdateResultSet.open(Unknown Source)
>       at org.apache.derby.impl.sql.GenericPreparedStatement.executeStmt(Unknown Source)
>       at org.apache.derby.impl.sql.GenericPreparedStatement.execute(Unknown Source)
>       ... 80 more
> {code}
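The duplicate-key failure above can be reproduced outside Hive. The following is a hypothetical sketch using SQLite rather than the real Derby metastore schema (only the TBLS/UNIQUETABLE names are taken from the trace, and it assumes UNIQUETABLE covers (TBL_NAME, DB_ID) as in the Hive schema): updating TBL_NAME on its own trips the index, while updating DB_ID and TBL_NAME in one statement would not.

```python
# Illustration only: SQLite stand-in for the Derby-backed metastore tables.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE TBLS (TBL_ID INTEGER PRIMARY KEY, DB_ID INTEGER, TBL_NAME TEXT);
    CREATE UNIQUE INDEX UNIQUETABLE ON TBLS (TBL_NAME, DB_ID);
    INSERT INTO TBLS VALUES (1, 1, 't1');  -- d1.t1, the table being renamed
    INSERT INTO TBLS VALUES (2, 1, 't2');  -- d1.t2, the pre-existing table
""")

# Renaming d1.t1 to d2.t2 by updating TBL_NAME alone fails: row 1 would
# transiently become (DB_ID=1, TBL_NAME='t2'), duplicating row 2.
try:
    conn.execute("UPDATE TBLS SET TBL_NAME='t2' WHERE TBL_ID=1")
    name_only_failed = False
except sqlite3.IntegrityError:
    name_only_failed = True
assert name_only_failed

# Updating both columns in a single statement reaches a unique final state.
conn.execute("UPDATE TBLS SET DB_ID=2, TBL_NAME='t2' WHERE TBL_ID=1")
```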
> Also, HiveAlterHandler#alterTable should check whether renaming the table's HDFS directory succeeded before committing the metadata change.



--
This message was sent by Atlassian JIRA
(v6.2#6252)
