[
https://issues.apache.org/jira/browse/HIVE-5457?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14376606#comment-14376606
]
Thiruvel Thirumoolan commented on HIVE-5457:
--------------------------------------------
[~andy]
We ran into the same issue with Hive 0.13. It surfaced in the
get_partitions_by_filter() and get_partition_names_ps() APIs, in both cases
while fetching the table object. We use DataNucleus 3.0.x. In our setup we run
multiple MetaStore servers behind a load balancer, and we also set
datanucleus.autoStartMechanism=SchemaTable since our backend is Oracle. Until
the affected server was restarted, the two Thrift APIs mentioned above kept
failing, which caused a lot of workflows to fail. This has happened twice in
the last month. Any ideas/suggestions?
2015-03-20 06:16:07,652 INFO [pool-2-thread-165] metastore.HiveMetaStore (HiveMetaStore.java:logInfo(637)) - 67: get_partitions_by_filter db=<db> tbl=<tbl> filter=[dt="2015032001"]
2015-03-20 06:16:07,715 ERROR [pool-2-thread-165] metastore.ObjectStore (ObjectStore.java:run(2681)) - Invalid index 1 for DataStoreMapping.
org.datanucleus.exceptions.NucleusException: Invalid index 1 for DataStoreMapping.
    at org.datanucleus.store.mapped.mapping.PersistableMapping.getDatastoreMapping(PersistableMapping.java:309)
    at org.datanucleus.store.rdbms.scostore.BackingStoreHelper.appendWhereClauseForMapping(BackingStoreHelper.java:513)
    at org.datanucleus.store.rdbms.scostore.ElementContainerStore.getSizeStmt(ElementContainerStore.java:765)
    at org.datanucleus.store.rdbms.scostore.ElementContainerStore.getSize(ElementContainerStore.java:624)
    at org.datanucleus.store.rdbms.scostore.ElementContainerStore.size(ElementContainerStore.java:453)
    at org.datanucleus.store.types.sco.backed.List.size(List.java:542)
    at org.apache.hadoop.hive.metastore.ObjectStore.convertToSkewedValues(ObjectStore.java:1195)
    at org.apache.hadoop.hive.metastore.ObjectStore.convertToStorageDescriptor(ObjectStore.java:1173)
    at org.apache.hadoop.hive.metastore.ObjectStore.convertToStorageDescriptor(ObjectStore.java:1183)
    at org.apache.hadoop.hive.metastore.ObjectStore.convertToTable(ObjectStore.java:1040)
    at org.apache.hadoop.hive.metastore.ObjectStore.ensureGetTable(ObjectStore.java:2823)
    at org.apache.hadoop.hive.metastore.ObjectStore.access$1200(ObjectStore.java:153)
    at org.apache.hadoop.hive.metastore.ObjectStore$GetHelper.start(ObjectStore.java:2692)
    at org.apache.hadoop.hive.metastore.ObjectStore$GetHelper.run(ObjectStore.java:2664)
    at org.apache.hadoop.hive.metastore.ObjectStore.getPartitionsByFilterInternal(ObjectStore.java:2781)
    at org.apache.hadoop.hive.metastore.ObjectStore.getPartitionsByFilter(ObjectStore.java:2624)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.lang.reflect.Method.invoke(Method.java:601)
    at org.apache.hadoop.hive.metastore.RawStoreProxy.invoke(RawStoreProxy.java:108)
    at $Proxy8.getPartitionsByFilter(Unknown Source)
    at org.apache.hadoop.hive.metastore.HiveMetaStore$HMSHandler.get_partitions_by_filter(HiveMetaStore.java:4326)
    at org.apache.hadoop.hive.metastore.api.ThriftHiveMetastore$Processor$get_partitions_by_filter.getResult(ThriftHiveMetastore.java:9356)
    at org.apache.hadoop.hive.metastore.api.ThriftHiveMetastore$Processor$get_partitions_by_filter.getResult(ThriftHiveMetastore.java:9340)
    at org.apache.thrift.ProcessFunction.process(ProcessFunction.java:39)
    at org.apache.thrift.TBaseProcessor.process(TBaseProcessor.java:39)
    at org.apache.hadoop.hive.thrift.HadoopThriftAuthBridge20S$Server$TUGIAssumingProcessor$2.run(HadoopThriftAuthBridge20S.java:706)
    at org.apache.hadoop.hive.thrift.HadoopThriftAuthBridge20S$Server$TUGIAssumingProcessor$2.run(HadoopThriftAuthBridge20S.java:702)
    at java.security.AccessController.doPrivileged(Native Method)
    at javax.security.auth.Subject.doAs(Subject.java:415)
    at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1694)
    at org.apache.hadoop.hive.thrift.HadoopThriftAuthBridge20S$Server$TUGIAssumingProcessor.process(HadoopThriftAuthBridge20S.java:702)
    at org.apache.thrift.server.TThreadPoolServer$WorkerProcess.run_aroundBody0(TThreadPoolServer.java:207)
    at org.apache.thrift.server.TThreadPoolServer$WorkerProcess.run_aroundBody1$advice(TThreadPoolServer.java:101)
    at org.apache.thrift.server.TThreadPoolServer$WorkerProcess.run(TThreadPoolServer.java:1)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1110)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:603)
    at java.lang.Thread.run(Thread.java:722)
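For reference, the setting described above is usually configured in hive-site.xml; Hive passes datanucleus.* properties straight through to DataNucleus. A minimal sketch (the property name and value match the setup described, everything else is boilerplate):

```xml
<!-- hive-site.xml fragment: enable DataNucleus SchemaTable auto-start,
     as described in the comment above. Value is illustrative of that setup. -->
<property>
  <name>datanucleus.autoStartMechanism</name>
  <value>SchemaTable</value>
</property>
```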
> Concurrent calls to getTable() result in: MetaException:
> org.datanucleus.exceptions.NucleusException: Invalid index 1 for
> DataStoreMapping. NucleusException: Invalid index 1 for DataStoreMapping
> ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
>
> Key: HIVE-5457
> URL: https://issues.apache.org/jira/browse/HIVE-5457
> Project: Hive
> Issue Type: Bug
> Components: Metastore
> Affects Versions: 0.10.0
> Reporter: Lenni Kuff
> Priority: Critical
>
> Concurrent calls to getTable() result in: MetaException:
> org.datanucleus.exceptions.NucleusException: Invalid index 1 for
> DataStoreMapping. NucleusException: Invalid index 1 for DataStoreMapping
> This happens when using a Hive Metastore Service that connects directly to
> the backend metastore db. I have been able to hit this with as few as 2
> concurrent calls. When I updated my app to serialize all calls to getTable(),
> the problem was resolved.
> Stack Trace:
> {code}
> Caused by: org.datanucleus.exceptions.NucleusException: Invalid index 1 for DataStoreMapping.
>     at org.datanucleus.store.mapped.mapping.PersistableMapping.getDatastoreMapping(PersistableMapping.java:307)
>     at org.datanucleus.store.rdbms.scostore.RDBMSElementContainerStoreSpecialization.getSizeStmt(RDBMSElementContainerStoreSpecialization.java:407)
>     at org.datanucleus.store.rdbms.scostore.RDBMSElementContainerStoreSpecialization.getSize(RDBMSElementContainerStoreSpecialization.java:257)
>     at org.datanucleus.store.rdbms.scostore.RDBMSJoinListStoreSpecialization.getSize(RDBMSJoinListStoreSpecialization.java:46)
>     at org.datanucleus.store.mapped.scostore.ElementContainerStore.size(ElementContainerStore.java:440)
>     at org.datanucleus.sco.backed.List.size(List.java:557)
>     at org.apache.hadoop.hive.metastore.ObjectStore.convertToSkewedValues(ObjectStore.java:1029)
>     at org.apache.hadoop.hive.metastore.ObjectStore.convertToStorageDescriptor(ObjectStore.java:1007)
>     at org.apache.hadoop.hive.metastore.ObjectStore.convertToStorageDescriptor(ObjectStore.java:1017)
>     at org.apache.hadoop.hive.metastore.ObjectStore.convertToTable(ObjectStore.java:872)
>     at org.apache.hadoop.hive.metastore.ObjectStore.getTable(ObjectStore.java:743)
>     at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>     at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
>     at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
>     at java.lang.reflect.Method.invoke(Method.java:597)
>     at org.apache.hadoop.hive.metastore.RetryingRawStore.invoke(RetryingRawStore.java:111)
>     at $Proxy6.getTable(Unknown Source)
>     at org.apache.hadoop.hive.metastore.HiveMetaStore$HMSHandler.get_table(HiveMetaStore.java:1349)
> {code}
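The reporter's workaround above (serializing all getTable() calls) can be sketched as a thin wrapper that funnels every call through a single lock, so the non-thread-safe code path is never entered concurrently. `TableFetcher` and `SerializedGetTable` are hypothetical stand-ins, not Hive API; the main method just demonstrates that the lock caps delegate concurrency at one:

```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.atomic.AtomicInteger;

// Sketch of the workaround: serialize all getTable() calls through one lock.
// TableFetcher is a hypothetical stand-in for the real metastore client.
public class SerializedGetTable {
    interface TableFetcher { String getTable(String db, String tbl); }

    private final TableFetcher delegate;
    private final Object lock = new Object();

    SerializedGetTable(TableFetcher delegate) { this.delegate = delegate; }

    String getTable(String db, String tbl) {
        synchronized (lock) {          // at most one caller inside the delegate
            return delegate.getTable(db, tbl);
        }
    }

    public static void main(String[] args) throws Exception {
        AtomicInteger inFlight = new AtomicInteger();
        AtomicInteger maxSeen = new AtomicInteger();
        // Instrumented fake fetcher: records the peak number of concurrent calls.
        SerializedGetTable client = new SerializedGetTable((db, tbl) -> {
            int now = inFlight.incrementAndGet();
            maxSeen.accumulateAndGet(now, Math::max);
            try { Thread.sleep(5); } catch (InterruptedException ignored) { }
            inFlight.decrementAndGet();
            return db + "." + tbl;
        });
        ExecutorService pool = Executors.newFixedThreadPool(8);
        for (int i = 0; i < 32; i++) {
            pool.submit(() -> client.getTable("db", "tbl"));
        }
        pool.shutdown();
        pool.awaitTermination(30, TimeUnit.SECONDS);
        if (maxSeen.get() != 1) {
            throw new AssertionError("delegate was entered concurrently: " + maxSeen.get());
        }
        System.out.println("max concurrent delegate calls: " + maxSeen.get());
    }
}
```

This trades throughput for safety (all metastore table fetches become sequential), which matches the reporter's observation that serializing the calls made the failures disappear.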
--
This message was sent by Atlassian JIRA
(v6.3.4#6332)