[jira] [Commented] (HIVE-16839) Unbalanced calls to openTransaction/commitTransaction when alter the same partition concurrently
[ https://issues.apache.org/jira/browse/HIVE-16839?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17442037#comment-17442037 ] Terry Han commented on HIVE-16839:
--
Hi [~yguang11], I have also hit this issue. As far as I can tell, this patch only handles the exception; the exception itself can still occur under concurrency. Is that right?

> Unbalanced calls to openTransaction/commitTransaction when alter the same partition concurrently
>
> Key: HIVE-16839
> URL: https://issues.apache.org/jira/browse/HIVE-16839
> Project: Hive
> Issue Type: Bug
> Affects Versions: 0.13.1, 1.1.0, 2.3.4, 3.0.0
> Reporter: Nemon Lou
> Assignee: Guang Yang
> Priority: Major
> Labels: pull-request-available
> Fix For: 3.2.0, 4.0.0
> Attachments: HIVE-16839.01.patch, HIVE-16839.02.patch, HIVE-16839.03.patch
> Time Spent: 20m
> Remaining Estimate: 0h
>
> SQL to reproduce:
> Prepare:
> {noformat}
> hdfs dfs -mkdir -p /hzsrc/external/writing_dc/ltgsm/16e7a9b2-21a1-3f4f-8061-bc3395281627
> 1,create external table tb_ltgsm_external (id int) PARTITIONED by (cp string,ld string);
> {noformat}
> Open one beeline session and run these two statements many times:
> {noformat}
> 2,ALTER TABLE tb_ltgsm_external ADD IF NOT EXISTS PARTITION (cp=2017060513,ld=2017060610);
> 3,ALTER TABLE tb_ltgsm_external PARTITION (cp=2017060513,ld=2017060610) SET LOCATION 'hdfs://hacluster/hzsrc/external/writing_dc/ltgsm/16e7a9b2-21a1-3f4f-8061-bc3395281627';
> {noformat}
> Open another beeline session and run this statement many times at the same time:
> {noformat}
> 4,ALTER TABLE tb_ltgsm_external DROP PARTITION (cp=2017060513,ld=2017060610);
> {noformat}
> MetaStore logs:
> {noformat}
> 2017-06-06 21:58:34,213 | ERROR | pool-6-thread-197 | Retrying HMSHandler after 2000 ms (attempt 1 of 10) with error: javax.jdo.JDOObjectNotFoundException: No such database row
> FailedObject:49[OID]org.apache.hadoop.hive.metastore.model.MStorageDescriptor
>     at org.datanucleus.api.jdo.NucleusJDOHelper.getJDOExceptionForNucleusException(NucleusJDOHelper.java:475)
>     at org.datanucleus.api.jdo.JDOAdapter.getApiExceptionForNucleusException(JDOAdapter.java:1158)
>     at org.datanucleus.state.JDOStateManager.isLoaded(JDOStateManager.java:3231)
>     at org.apache.hadoop.hive.metastore.model.MStorageDescriptor.jdoGetcd(MStorageDescriptor.java)
>     at org.apache.hadoop.hive.metastore.model.MStorageDescriptor.getCD(MStorageDescriptor.java:184)
>     at org.apache.hadoop.hive.metastore.ObjectStore.convertToStorageDescriptor(ObjectStore.java:1282)
>     at org.apache.hadoop.hive.metastore.ObjectStore.convertToStorageDescriptor(ObjectStore.java:1299)
>     at org.apache.hadoop.hive.metastore.ObjectStore.convertToPart(ObjectStore.java:1680)
>     at org.apache.hadoop.hive.metastore.ObjectStore.getPartition(ObjectStore.java:1586)
>     at sun.reflect.GeneratedMethodAccessor35.invoke(Unknown Source)
>     at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>     at java.lang.reflect.Method.invoke(Method.java:497)
>     at org.apache.hadoop.hive.metastore.RawStoreProxy.invoke(RawStoreProxy.java:98)
>     at com.sun.proxy.$Proxy0.getPartition(Unknown Source)
>     at org.apache.hadoop.hive.metastore.HiveAlterHandler.alterPartitions(HiveAlterHandler.java:538)
>     at org.apache.hadoop.hive.metastore.HiveMetaStore$HMSHandler.alter_partitions(HiveMetaStore.java:3317)
>     at sun.reflect.GeneratedMethodAccessor37.invoke(Unknown Source)
>     at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>     at java.lang.reflect.Method.invoke(Method.java:497)
>     at org.apache.hadoop.hive.metastore.RetryingHMSHandler.invoke(RetryingHMSHandler.java:102)
>     at com.sun.proxy.$Proxy12.alter_partitions(Unknown Source)
>     at org.apache.hadoop.hive.metastore.api.ThriftHiveMetastore$Processor$alter_partitions.getResult(ThriftHiveMetastore.java:9963)
>     at org.apache.hadoop.hive.metastore.api.ThriftHiveMetastore$Processor$alter_partitions.getResult(ThriftHiveMetastore.java:9947)
>     at org.apache.thrift.ProcessFunction.process(ProcessFunction.java:39)
>     at org.apache.hadoop.hive.metastore.TUGIBasedProcessor$1.run(TUGIBasedProcessor.java:110)
>     at org.apache.hadoop.hive.metastore.TUGIBasedProcessor$1.run(TUGIBasedProcessor.java:106)
>     at java.security.AccessController.doPrivileged(Native Method)
>     at javax.security.auth.Subject.doAs(Subject.java:422)
>     at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1673)
> {noformat}
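The "Unbalanced calls to openTransaction/commitTransaction" symptom arises when an exception (here, the JDOObjectNotFoundException thrown while re-reading a partition that a concurrent client just dropped) escapes between an openTransaction() and its matching commitTransaction(). The following is a minimal, self-contained sketch of the balanced-transaction pattern using hypothetical counter methods; it is an illustration of the general fix idea, not the actual HiveAlterHandler or metastore API:

```java
// Illustrative sketch only: stand-in counters, not the real Hive metastore API.
// Shows how a rollback in a finally block keeps open/commit/rollback calls
// balanced even when the body throws, as JDOObjectNotFoundException does here.
public class BalancedTxnSketch {
    static int opens = 0, commits = 0, rollbacks = 0;

    static void openTransaction()     { opens++; }
    static void commitTransaction()   { commits++; }
    static void rollbackTransaction() { rollbacks++; }

    // Mirrors the shape of an alterPartitions-style method: any exception thrown
    // after openTransaction() must be answered by exactly one rollbackTransaction().
    static void alterPartitionsSketch(boolean simulateConcurrentDrop) {
        boolean committed = false;
        openTransaction();
        try {
            if (simulateConcurrentDrop) {
                // Stand-in for JDOObjectNotFoundException: the row vanished underneath us.
                throw new RuntimeException("No such database row");
            }
            commitTransaction();
            committed = true;
        } catch (RuntimeException e) {
            // A real handler would translate and rethrow this, e.g. as a MetaException.
        } finally {
            if (!committed) {
                rollbackTransaction();  // keeps the call counts balanced
            }
        }
    }

    public static void main(String[] args) {
        alterPartitionsSketch(false);  // happy path: commit
        alterPartitionsSketch(true);   // concurrent drop: rollback instead
        if (opens != commits + rollbacks) {
            throw new AssertionError("unbalanced transaction calls");
        }
        System.out.println("opens=" + opens + " commits=" + commits + " rollbacks=" + rollbacks);
    }
}
```

Running main prints opens=2 commits=1 rollbacks=1: every openTransaction() is matched, which is exactly the invariant the error message in the logs says was violated.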
[jira] [Commented] (HIVE-16839) Unbalanced calls to openTransaction/commitTransaction when alter the same partition concurrently
[ https://issues.apache.org/jira/browse/HIVE-16839?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16680476#comment-16680476 ] Vihang Karajgaonkar commented on HIVE-16839:
--
Yeah, we haven't had a release on branch-1 or earlier for a long time, so I wouldn't count on one. You might be able to port the patch to your installation manually, since it is a simple patch.
[jira] [Commented] (HIVE-16839) Unbalanced calls to openTransaction/commitTransaction when alter the same partition concurrently
[ https://issues.apache.org/jira/browse/HIVE-16839?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16679139#comment-16679139 ] Vihang Karajgaonkar commented on HIVE-16839:
--
The v3 patch looks good to me. +1
[jira] [Commented] (HIVE-16839) Unbalanced calls to openTransaction/commitTransaction when alter the same partition concurrently
[ https://issues.apache.org/jira/browse/HIVE-16839?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16680323#comment-16680323 ] Guang Yang commented on HIVE-16839:
--
Thanks [~vihangk1]! We are actually running Hive 0.13.1; I am not sure whether we still have a release line for that. As you mentioned, for old versions we may be able to simply remove the transaction handling from this function. In any case, I will work with our support team to figure out a way to patch the version we are running.
[jira] [Commented] (HIVE-16839) Unbalanced calls to openTransaction/commitTransaction when alter the same partition concurrently
[ https://issues.apache.org/jira/browse/HIVE-16839?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16680293#comment-16680293 ] Vihang Karajgaonkar commented on HIVE-16839:
--
[~yguang11], there were many conflicts when I tried to apply the patch on branch-2. If you want this fixed in branch-2, please attach a patch and I would be happy to commit it.
[jira] [Commented] (HIVE-16839) Unbalanced calls to openTransaction/commitTransaction when alter the same partition concurrently
[ https://issues.apache.org/jira/browse/HIVE-16839?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16680291#comment-16680291 ] Vihang Karajgaonkar commented on HIVE-16839:
--
Patch committed to master and branch-3. Thanks for your contribution, [~yguang11]!
[jira] [Commented] (HIVE-16839) Unbalanced calls to openTransaction/commitTransaction when alter the same partition concurrently
[ https://issues.apache.org/jira/browse/HIVE-16839?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16678786#comment-16678786 ] Guang Yang commented on HIVE-16839:
--
Hi [~vihangk1], I updated the unit test per your suggestion. It looks like the new run passed; could you help commit the change? Thanks for your help on this!
[jira] [Commented] (HIVE-16839) Unbalanced calls to openTransaction/commitTransaction when alter the same partition concurrently
[ https://issues.apache.org/jira/browse/HIVE-16839?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16678774#comment-16678774 ] Hive QA commented on HIVE-16839:

Here are the results of testing the latest attachment: https://issues.apache.org/jira/secure/attachment/12947197/HIVE-16839.03.patch

{color:green}SUCCESS:{color} +1 due to 1 test(s) being added or modified.
{color:green}SUCCESS:{color} +1 due to 15527 tests passed

Test results: https://builds.apache.org/job/PreCommit-HIVE-Build/14796/testReport
Console output: https://builds.apache.org/job/PreCommit-HIVE-Build/14796/console
Test logs: http://104.198.109.242/logs/PreCommit-HIVE-Build-14796/

Messages:
{noformat}
Executing org.apache.hive.ptest.execution.TestCheckPhase
Executing org.apache.hive.ptest.execution.PrepPhase
Executing org.apache.hive.ptest.execution.YetusPhase
Executing org.apache.hive.ptest.execution.ExecutionPhase
Executing org.apache.hive.ptest.execution.ReportingPhase
{noformat}

This message is automatically generated.
ATTACHMENT ID: 12947197 - PreCommit-HIVE-Build
[jira] [Commented] (HIVE-16839) Unbalanced calls to openTransaction/commitTransaction when alter the same partition concurrently
[ https://issues.apache.org/jira/browse/HIVE-16839?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16678727#comment-16678727 ] Hive QA commented on HIVE-16839:

| (/) *{color:green}+1 overall{color}* |

|| Vote || Subsystem || Runtime || Comment ||
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 1s{color} | {color:green} The patch does not contain any @author tags. {color} |
|| || || || {color:brown} master Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 7m 41s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 24s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 6s{color} | {color:green} master passed {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue} 1m 3s{color} | {color:blue} standalone-metastore/metastore-server in master has 185 extant Findbugs warnings. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 17s{color} | {color:green} master passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 28s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 23s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 23s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 7s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 10s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 18s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 13s{color} | {color:green} The patch does not generate ASF License warnings. {color} |
| {color:black}{color} | {color:black} {color} | {color:black} 12m 34s{color} | {color:black} {color} |

|| Subsystem || Report/Notes ||
| Optional Tests | asflicense javac javadoc findbugs checkstyle compile |
| uname | Linux hiveptest-server-upstream 3.16.0-4-amd64 #1 SMP Debian 3.16.36-1+deb8u1 (2016-09-03) x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /data/hiveptest/working/yetus_PreCommit-HIVE-Build-14796/dev-support/hive-personality.sh |
| git revision | master / 6d713b6 |
| Default Java | 1.8.0_111 |
| findbugs | v3.0.0 |
| modules | C: standalone-metastore/metastore-server U: standalone-metastore/metastore-server |
| Console output | http://104.198.109.242/logs//PreCommit-HIVE-Build-14796/yetus.txt |
| Powered by | Apache Yetus http://yetus.apache.org |

This message was automatically generated.
[jira] [Commented] (HIVE-16839) Unbalanced calls to openTransaction/commitTransaction when alter the same partition concurrently
[ https://issues.apache.org/jira/browse/HIVE-16839?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16677469#comment-16677469 ] Vihang Karajgaonkar commented on HIVE-16839: Yeah, the test failures look unrelated. But as a matter of commit policy I cannot commit the patch until we have a green +1 from the precommit job. Can you please reattach the patch as version 03? Also, can you change the timeout below to a reasonable value like 30 sec?
{noformat}
executorService.awaitTermination(Long.MAX_VALUE, TimeUnit.DAYS.NANOSECONDS);
{noformat}
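(Aside: the quoted call compiles because Java lets you reach an enum constant through another constant, so `TimeUnit.DAYS.NANOSECONDS` is just `TimeUnit.NANOSECONDS` — the test appears to wait Long.MAX_VALUE nanoseconds, effectively unbounded.) A bounded wait along the lines suggested might look like this sketch, with a hypothetical two-thread pool standing in for the test's concurrent alter-partition workers:

```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;

public class AwaitDemo {
    public static void main(String[] args) throws InterruptedException {
        ExecutorService executorService = Executors.newFixedThreadPool(2);
        executorService.submit(() -> {
            // concurrent alter-partition calls would run here in the real test
        });
        executorService.shutdown();  // stop accepting tasks, let running ones finish
        // Bounded wait: returns true if tasks finished, false on timeout,
        // instead of blocking essentially forever.
        boolean finished = executorService.awaitTermination(30, TimeUnit.SECONDS);
        System.out.println(finished);
    }
}
```

With a bounded timeout a hung metastore call fails the test after 30 seconds rather than stalling the whole batch.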
[jira] [Commented] (HIVE-16839) Unbalanced calls to openTransaction/commitTransaction when alter the same partition concurrently
[ https://issues.apache.org/jira/browse/HIVE-16839?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16677461#comment-16677461 ] ASF GitHub Bot commented on HIVE-16839: --- Github user guangyy closed the pull request at: https://github.com/apache/hive/pull/453
[jira] [Commented] (HIVE-16839) Unbalanced calls to openTransaction/commitTransaction when alter the same partition concurrently
[ https://issues.apache.org/jira/browse/HIVE-16839?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16677065#comment-16677065 ] Guang Yang commented on HIVE-16839: --- Hey [~vihangk1], thanks for the suggestions. The test failures don't seem related to this patch. Could you take a look at the change? Thanks!
[jira] [Commented] (HIVE-16839) Unbalanced calls to openTransaction/commitTransaction when alter the same partition concurrently
[ https://issues.apache.org/jira/browse/HIVE-16839?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16676378#comment-16676378 ] Hive QA commented on HIVE-16839:

Here are the results of testing the latest attachment: https://issues.apache.org/jira/secure/attachment/12946982/HIVE-16839.02.patch

{color:green}SUCCESS:{color} +1 due to 1 test(s) being added or modified.
{color:red}ERROR:{color} -1 due to 4 failed/errored test(s), 15520 tests executed

*Failed tests:*
{noformat}
TestMiniDruidCliDriver - did not produce a TEST-*.xml file (likely timed out) (batchId=196)
  [druidmini_masking.q,druidmini_test1.q,druidkafkamini_basic.q,druidmini_joins.q,druid_timestamptz.q]
org.apache.hive.jdbc.TestTriggersTezSessionPoolManager.testTriggerCustomCreatedDynamicPartitions (batchId=258)
org.apache.hive.jdbc.TestTriggersTezSessionPoolManager.testTriggerCustomCreatedDynamicPartitionsUnionAll (batchId=258)
org.apache.hive.jdbc.TestTriggersTezSessionPoolManager.testTriggerHighShuffleBytes (batchId=258)
{noformat}

Test results: https://builds.apache.org/job/PreCommit-HIVE-Build/14762/testReport
Console output: https://builds.apache.org/job/PreCommit-HIVE-Build/14762/console
Test logs: http://104.198.109.242/logs/PreCommit-HIVE-Build-14762/

Messages:
{noformat}
Executing org.apache.hive.ptest.execution.TestCheckPhase
Executing org.apache.hive.ptest.execution.PrepPhase
Executing org.apache.hive.ptest.execution.YetusPhase
Executing org.apache.hive.ptest.execution.ExecutionPhase
Executing org.apache.hive.ptest.execution.ReportingPhase
Tests exited with: TestsFailedException: 4 tests failed
{noformat}

This message is automatically generated.
ATTACHMENT ID: 12946982 - PreCommit-HIVE-Build
[jira] [Commented] (HIVE-16839) Unbalanced calls to openTransaction/commitTransaction when alter the same partition concurrently
[ https://issues.apache.org/jira/browse/HIVE-16839?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16676314#comment-16676314 ] Hive QA commented on HIVE-16839:

| (/) *{color:green}+1 overall{color}* |

|| Vote || Subsystem || Runtime || Comment ||
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} |
|| || || || {color:brown} master Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 7m 25s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 22s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 7s{color} | {color:green} master passed {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue} 1m 3s{color} | {color:blue} standalone-metastore/metastore-server in master has 185 extant Findbugs warnings. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 17s{color} | {color:green} master passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 30s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 23s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 23s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 6s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 6s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 17s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 12s{color} | {color:green} The patch does not generate ASF License warnings. {color} |
| {color:black}{color} | {color:black} {color} | {color:black} 12m 13s{color} | {color:black} {color} |

|| Subsystem || Report/Notes ||
| Optional Tests | asflicense javac javadoc findbugs checkstyle compile |
| uname | Linux hiveptest-server-upstream 3.16.0-4-amd64 #1 SMP Debian 3.16.36-1+deb8u1 (2016-09-03) x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /data/hiveptest/working/yetus_PreCommit-HIVE-Build-14762/dev-support/hive-personality.sh |
| git revision | master / 353c55e |
| Default Java | 1.8.0_111 |
| findbugs | v3.0.0 |
| modules | C: standalone-metastore/metastore-server U: standalone-metastore/metastore-server |
| Console output | http://104.198.109.242/logs//PreCommit-HIVE-Build-14762/yetus.txt |
| Powered by | Apache Yetus http://yetus.apache.org |

This message was automatically generated.
[ https://issues.apache.org/jira/browse/HIVE-16839?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16674261#comment-16674261 ]

Hive QA commented on HIVE-16839:

Here are the results of testing the latest attachment:
https://issues.apache.org/jira/secure/attachment/12946774/HIVE-16839.01.patch

{color:red}ERROR:{color} -1 due to build exiting with an error

Test results: https://builds.apache.org/job/PreCommit-HIVE-Build/14738/testReport
Console output: https://builds.apache.org/job/PreCommit-HIVE-Build/14738/console
Test logs: http://104.198.109.242/logs/PreCommit-HIVE-Build-14738/

Messages:
{noformat}
Executing org.apache.hive.ptest.execution.TestCheckPhase
Tests exited with: Exception: Patch URL https://issues.apache.org/jira/secure/attachment/12946774/HIVE-16839.01.patch was found in seen patch url's cache and a test was probably run already on it. Aborting...
{noformat}

This message is automatically generated.

ATTACHMENT ID: 12946774 - PreCommit-HIVE-Build
[ https://issues.apache.org/jira/browse/HIVE-16839?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16674230#comment-16674230 ]

Hive QA commented on HIVE-16839:

Here are the results of testing the latest attachment:
https://issues.apache.org/jira/secure/attachment/12946774/HIVE-16839.01.patch

{color:green}SUCCESS:{color} +1 due to 1 test(s) being added or modified.
{color:red}ERROR:{color} -1 due to 1 failed/errored test(s), 15524 tests executed

*Failed tests:*
{noformat}
org.apache.hadoop.hive.metastore.TestObjectStore.testConcurrentDropPartitions (batchId=231)
{noformat}

Test results: https://builds.apache.org/job/PreCommit-HIVE-Build/14732/testReport
Console output: https://builds.apache.org/job/PreCommit-HIVE-Build/14732/console
Test logs: http://104.198.109.242/logs/PreCommit-HIVE-Build-14732/

Messages:
{noformat}
Executing org.apache.hive.ptest.execution.TestCheckPhase
Executing org.apache.hive.ptest.execution.PrepPhase
Executing org.apache.hive.ptest.execution.YetusPhase
Executing org.apache.hive.ptest.execution.ExecutionPhase
Executing org.apache.hive.ptest.execution.ReportingPhase
Tests exited with: TestsFailedException: 1 tests failed
{noformat}

This message is automatically generated.

ATTACHMENT ID: 12946774 - PreCommit-HIVE-Build
[ https://issues.apache.org/jira/browse/HIVE-16839?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16674214#comment-16674214 ]

Hive QA commented on HIVE-16839:

| (/) *{color:green}+1 overall{color}* |
\\ \\
|| Vote || Subsystem || Runtime || Comment ||
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} |
|| || || || {color:brown} master Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 7m 47s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 24s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 6s{color} | {color:green} master passed {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue} 1m 5s{color} | {color:blue} standalone-metastore/metastore-server in master has 185 extant Findbugs warnings. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 18s{color} | {color:green} master passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 27s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 23s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 23s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 6s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 9s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 18s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 12s{color} | {color:green} The patch does not generate ASF License warnings. {color} |
| {color:black}{color} | {color:black} {color} | {color:black} 12m 37s{color} | {color:black} {color} |
\\ \\
|| Subsystem || Report/Notes ||
| Optional Tests | asflicense javac javadoc findbugs checkstyle compile |
| uname | Linux hiveptest-server-upstream 3.16.0-4-amd64 #1 SMP Debian 3.16.36-1+deb8u1 (2016-09-03) x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /data/hiveptest/working/yetus_PreCommit-HIVE-Build-14732/dev-support/hive-personality.sh |
| git revision | master / ae1eb15 |
| Default Java | 1.8.0_111 |
| findbugs | v3.0.0 |
| modules | C: standalone-metastore/metastore-server U: standalone-metastore/metastore-server |
| Console output | http://104.198.109.242/logs//PreCommit-HIVE-Build-14732/yetus.txt |
| Powered by | Apache Yetus http://yetus.apache.org |

This message was automatically generated.
[ https://issues.apache.org/jira/browse/HIVE-16839?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16673840#comment-16673840 ]

Vihang Karajgaonkar commented on HIVE-16839:

The patch name should be {{HIVE-16839.01.patch}}
[ https://issues.apache.org/jira/browse/HIVE-16839?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16673816#comment-16673816 ]

ASF GitHub Bot commented on HIVE-16839:
---

GitHub user guangyy opened a pull request:

https://github.com/apache/hive/pull/484

HIVE-16839: Fix a race condidtion during concurrent partition drops

We have seen a leaked lock on the Hive metastore DB which caused all PARTITION insertions to fail on timeout waiting for the lock until the metastore service was restarted. A transaction dump on the DB shows a thread in the Sleep state which potentially holds the lock, like:

```
trx_id: 33603171058
trx_state: RUNNING
trx_started: 2018-10-23 06:43:22
trx_requested_lock_id: NULL
trx_wait_started: NULL
trx_weight: 70298
trx_mysql_thread_id: 275402202
trx_query: NULL
trx_operation_state: NULL
trx_tables_in_use: 0
trx_tables_locked: 0
trx_lock_structs: 21286
trx_lock_memory_bytes: 2881064
trx_rows_locked: 98810
trx_rows_modified: 49012
trx_concurrency_tickets: 0
trx_isolation_level: READ COMMITTED
trx_unique_checks: 1
trx_foreign_key_checks: 1
trx_last_foreign_key_error: NULL
trx_adaptive_hash_latched: 0
trx_adaptive_hash_timeout: 0
trx_is_read_only: 0
trx_autocommit_non_locking: 0
ID: 275402202
USER: metastore_gold
HOST: 10.37.182.82:36684
DB: metastoregold
COMMAND: Sleep
TIME: 1
STATE:
INFO: NULL
duration: 1316
```

Given the HOST ip, we traced back to the Hive metastore instance and found the following exceptions:

```
No such database row
org.datanucleus.exceptions.NucleusObjectNotFoundException: No such database row
  at org.datanucleus.store.rdbms.request.FetchRequest.execute(FetchRequest.java:357)
  at org.datanucleus.store.rdbms.RDBMSPersistenceHandler.fetchObject(RDBMSPersistenceHandler.java:324)
  at org.datanucleus.state.AbstractStateManager.loadFieldsFromDatastore(AbstractStateManager.java:1120)
  at org.datanucleus.state.JDOStateManager.loadSpecifiedFields(JDOStateManager.java:2916)
  at org.datanucleus.state.JDOStateManager.isLoaded(JDOStateManager.java:3219)
```

The problem is that the caller expects a NULL if the partition does not exist; however, the convertToPart function throws an exception instead, which leads to the leak.

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/guangyy/hive HIVE-16839

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/hive/pull/484.patch

To close this pull request, make a commit to your master/trunk branch with (at least) the following in the commit message:

This closes #484

commit 5137027ee658990dd1503c09c13a73e2848d8deb
Author: Guang Yang
Date: 2018-11-02T23:21:35Z

HIVE-16839: Fix a race condidtion during concurrent partition drops
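The race the PR describes (a concurrent DROP PARTITION deletes the storage-descriptor row while another thread is mid-read in getPartition) and the pattern of the proposed fix can be sketched in plain Java. This is a minimal, hypothetical illustration only: PartitionStore, getPartitionOrNull, and IllegalStateException below are stand-ins with no Hive or DataNucleus dependency, not the actual patch code.

```java
import java.util.HashMap;
import java.util.Map;

// Hypothetical sketch of the fix pattern: a row removed by a concurrent DROP
// should surface to the caller as "partition not found" (null) rather than as
// an escaping not-found exception, which in the real metastore left the
// surrounding transaction unbalanced.
class PartitionStore {
    private final Map<String, String> rows = new HashMap<>();

    void put(String partName, String location) { rows.put(partName, location); }

    void drop(String partName) { rows.remove(partName); }

    // Analogue of ObjectStore.getPartition: a concurrent drop may delete the
    // row between the lookup and the conversion, which shows up as a
    // not-found error from the persistence layer.
    String getPartitionOrNull(String partName) {
        try {
            return convertToPart(partName);
        } catch (IllegalStateException notFound) {
            // Stand-in for JDOObjectNotFoundException: swallow it and report
            // "no such partition" so the caller can commit cleanly.
            return null;
        }
    }

    // Analogue of ObjectStore.convertToPart, which threw when the backing
    // database row had already vanished ("No such database row").
    private String convertToPart(String partName) {
        String location = rows.get(partName);
        if (location == null) {
            throw new IllegalStateException("No such database row");
        }
        return location;
    }
}
```

In the real ObjectStore the equivalent would presumably be catching the DataNucleus not-found exception on that read path, so that HiveAlterHandler.alterPartitions sees a missing partition and can fail gracefully inside a balanced openTransaction/commitTransaction pair.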
[ https://issues.apache.org/jira/browse/HIVE-16839?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16673797#comment-16673797 ]

Guang Yang commented on HIVE-16839:
---

Thanks [~vihangk1], I am happy to take this on.
[ https://issues.apache.org/jira/browse/HIVE-16839?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16669255#comment-16669255 ]

Vihang Karajgaonkar commented on HIVE-16839:

Hi [~yguang11], since you have created a PR, would you like to take ownership of this JIRA? I can help review and commit it. Otherwise, I can simply create a patch based off yours and attach it here for the master branch (you will of course get the credit for the contribution ;)). I would recommend attaching a patch for the master branch, as I suggested in the PR review, and we can then manually port it to branch-3, branch-2 and branch-1 once the tests on it pass.
[jira] [Commented] (HIVE-16839) Unbalanced calls to openTransaction/commitTransaction when alter the same partition concurrently
[ https://issues.apache.org/jira/browse/HIVE-16839?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16664114#comment-16664114 ] Guang Yang commented on HIVE-16839: --- We have seen a similar issue running Hive 0.13 and have opened a PR: https://github.com/apache/hive/pull/453
[jira] [Commented] (HIVE-16839) Unbalanced calls to openTransaction/commitTransaction when alter the same partition concurrently
[ https://issues.apache.org/jira/browse/HIVE-16839?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16061759#comment-16061759 ] Nemon Lou commented on HIVE-16839: -- Our system does not support concurrency. When users accidentally submit a drop partition and a modification of the same partition concurrently, we end up with an uncommitted transaction. With PostgreSQL as the backend, this leaves a connection stuck in the "idle in transaction" state.
[jira] [Commented] (HIVE-16839) Unbalanced calls to openTransaction/commitTransaction when alter the same partition concurrently
[ https://issues.apache.org/jira/browse/HIVE-16839?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16061678#comment-16061678 ] Vihang Karajgaonkar commented on HIVE-16839: Hi [~nemon] What is the value of {{hive.support.concurrency}} on your system? I think this issue is related to concurrency rather than to unbalanced calls to open and commit transaction. When concurrency is turned off, both sessions are free to proceed without acquiring any ZK locks, hence the exception in one of the sessions. The trace shows that it is trying to get the StorageDescriptor of the partition while the other session has already dropped the partition. When you turn on concurrency, the drop partition session will wait until it acquires the lock and then proceed as expected.
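Vihang's explanation of the race can be sketched with a toy model (plain Java, nothing Hive-specific; the map stands in for the partition table and the `ReentrantLock` stands in for the ZK lock he mentions — all names here are illustrative, not Hive's actual classes):

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.locks.ReentrantLock;

// Toy model of the race: "alter" reads a partition while "drop" removes it.
// Without a shared lock, the alter can observe the row vanishing mid-operation
// (the JDOObjectNotFoundException in the log above); with the lock, the two
// operations serialize, so the alter either completes before the drop or fails
// cleanly up front with a NoSuchObject-style result.
class PartitionRace {
    static final Map<String, String> partitions = new ConcurrentHashMap<>();
    static final ReentrantLock tableLock = new ReentrantLock(); // stand-in for the ZK lock

    static String alterWithLock(String key, String newLocation) {
        tableLock.lock();
        try {
            if (!partitions.containsKey(key)) {
                return "NoSuchObject"; // partition already dropped: fail cleanly
            }
            partitions.put(key, newLocation); // SET LOCATION analogue
            return "altered";
        } finally {
            tableLock.unlock();
        }
    }

    static void dropWithLock(String key) {
        tableLock.lock();
        try {
            partitions.remove(key); // DROP PARTITION analogue
        } finally {
            tableLock.unlock();
        }
    }

    public static void main(String[] args) throws InterruptedException {
        partitions.put("cp=2017060513/ld=2017060610", "hdfs://old");
        Thread alter = new Thread(() -> System.out.println(
                alterWithLock("cp=2017060513/ld=2017060610", "hdfs://new")));
        Thread drop = new Thread(() -> dropWithLock("cp=2017060513/ld=2017060610"));
        alter.start(); drop.start();
        alter.join(); drop.join();
        // Either interleaving is valid, but neither thread sees a half-dropped row.
    }
}
```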
[jira] [Commented] (HIVE-16839) Unbalanced calls to openTransaction/commitTransaction when alter the same partition concurrently
[ https://issues.apache.org/jira/browse/HIVE-16839?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16042020#comment-16042020 ] Nemon Lou commented on HIVE-16839: -- I have assigned it to you. Thanks.
[jira] [Commented] (HIVE-16839) Unbalanced calls to openTransaction/commitTransaction when alter the same partition concurrently
[ https://issues.apache.org/jira/browse/HIVE-16839?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16041223#comment-16041223 ] Vihang Karajgaonkar commented on HIVE-16839: Hi [~nemon], I can take a look at this if you are not actively working on it.
[jira] [Commented] (HIVE-16839) Unbalanced calls to openTransaction/commitTransaction when alter the same partition concurrently
[ https://issues.apache.org/jira/browse/HIVE-16839?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16040007#comment-16040007 ] Nemon Lou commented on HIVE-16839: -- It seems that we need a rollbackTransaction in the getPartition method of ObjectStore:
{code:java}
  @Override
  public Partition getPartition(String dbName, String tableName,
      List<String> part_vals) throws NoSuchObjectException, MetaException {
    openTransaction();
    Partition part = convertToPart(getMPartition(dbName, tableName, part_vals));
    commitTransaction();
    if (part == null) {
      throw new NoSuchObjectException("partition values=" + part_vals.toString());
    }
    part.setValues(part_vals);
    return part;
  }
{code}
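The balanced open/commit/rollback pattern Nemon suggests can be sketched independently of the metastore classes. Below is a minimal, hypothetical stand-in (the `TxStore` class and its counter are illustrative, not Hive's actual implementation): if the fetch throws because the row was dropped concurrently, the transaction is rolled back in a `finally` block instead of being left open, so the open/commit counts stay balanced.

```java
// Minimal sketch of the balanced-transaction pattern. TxStore is a
// hypothetical stand-in for ObjectStore: it only tracks nesting depth,
// which is exactly what the "Unbalanced calls" error complains about.
class TxStore {
    private int depth = 0;

    void openTransaction() { depth++; }

    void commitTransaction() {
        if (depth <= 0) {
            throw new IllegalStateException(
                "Unbalanced calls to openTransaction/commitTransaction");
        }
        depth--;
    }

    void rollbackTransaction() { depth = 0; } // abandon the whole nest

    int depth() { return depth; }

    // getPartition with the rollback-on-failure guard: if the lookup throws
    // (e.g. the partition was dropped by a concurrent session), the finally
    // block rolls back rather than leaving an open transaction behind.
    String getPartition(java.util.Map<String, String> backing, String key) {
        boolean committed = false;
        openTransaction();
        try {
            String part = backing.get(key);
            if (part == null) {
                throw new RuntimeException("No such database row");
            }
            commitTransaction();
            committed = true;
            return part;
        } finally {
            if (!committed) {
                rollbackTransaction();
            }
        }
    }

    public static void main(String[] args) {
        TxStore store = new TxStore();
        java.util.Map<String, String> db = new java.util.HashMap<>();
        try {
            // Simulates the JIRA scenario: the partition was already dropped.
            store.getPartition(db, "cp=2017060513/ld=2017060610");
        } catch (RuntimeException expected) { }
        System.out.println(store.depth()); // 0: no transaction left open
    }
}
```

The key design point is the `committed` flag checked in `finally`: it covers every exit path, including exceptions thrown before `commitTransaction()` is reached, which is the path the stack trace above takes.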