Running a Hive client with CREATE EXTERNAL TABLE for HBase
Hi, I am currently seeing random behavior while trying to write a Java client for Hive/HBase integration. Case: I am trying to create a Hive table over an existing HBase table, so I started HiveServer via hive --service hiveserver. In the logs I can see it printing my SQL with CREATE EXTERNAL TABLE, but somehow that table is not getting created in Hive. The interesting point is that running the same SQL from the Hive command line works fine. This behavior is random: sometimes SHOW TABLES shows me all the created tables in Hive, sometimes not. Does it have something to do with the 'metastore_db' directory? Any ideas?
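For reference, mapping an existing HBase table into Hive is done with a CREATE EXTERNAL TABLE statement using the HBase storage handler. A sketch along these lines (the table and column names here are hypothetical; adjust hbase.columns.mapping to your actual column families and qualifiers) should behave the same whether issued from the CLI or through the server:

```sql
-- Map an existing HBase table 'my_hbase_table' into Hive.
-- Column names and families below are placeholders.
CREATE EXTERNAL TABLE hbase_users(key STRING, name STRING, age INT)
STORED BY 'org.apache.hadoop.hive.hbase.HBaseStorageHandler'
WITH SERDEPROPERTIES ("hbase.columns.mapping" = ":key,info:name,info:age")
TBLPROPERTIES ("hbase.table.name" = "my_hbase_table");
```

If a statement like this succeeds in the CLI but not through your Java client, the two processes may not be talking to the same metastore (each embedded Derby metastore_db lives in the working directory of the process that created it), which would also explain why SHOW TABLES results differ between runs.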
Does anyone get past TestEmbeddedHiveMetaStore in ant test?
test:
[junit] Running org.apache.hadoop.hive.metastore.TestEmbeddedHiveMetaStore
[junit] Tests run: 11, Failures: 0, Errors: 0, Time elapsed: 34.192 sec
[junit] BR.recoverFromMismatchedToken
[junit] Running org.apache.hadoop.hive.metastore.TestRemoteHiveMetaStore
[junit] Running metastore!
[junit] Running metastore!
[junit] org.apache.thrift.transport.TTransportException: Could not create ServerSocket on address 0.0.0.0/0.0.0.0:29083.
[junit]     at org.apache.thrift.transport.TServerSocket.init(TServerSocket.java:98)
[junit]     at org.apache.thrift.transport.TServerSocket.init(TServerSocket.java:79)
[junit]     at org.apache.hadoop.hive.metastore.TServerSocketKeepAlive.init(TServerSocketKeepAlive.java:34)
[junit]     at org.apache.hadoop.hive.metastore.HiveMetaStore.main(HiveMetaStore.java:2189)
[junit]     at org.apache.hadoop.hive.metastore.TestRemoteHiveMetaStore$RunMS.run(TestRemoteHiveMetaStore.java:35)
[junit]     at java.lang.Thread.run(Thread.java:619)
[junit] Running org.apache.hadoop.hive.metastore.TestRemoteHiveMetaStore
[junit] Tests run: 1, Failures: 0, Errors: 1, Time elapsed: 0 sec
[junit] Test org.apache.hadoop.hive.metastore.TestRemoteHiveMetaStore FAILED (crashed)

Does this test pass for anyone? If so, did you need to do anything network-wise to make it happen?

Edward
Re: Does anyone get past TestEmbeddedHiveMetaStore in ant test?
This happens because you have a slower machine :) In this test, HiveMetaStore is brought up in a separate thread and the current thread then sleeps for 5 seconds, hoping the MetaStore will be up by then. On a heavily loaded (or slow) machine, 5 seconds may be too little. Increase the sleep at TestRemoteHiveMetaStore.java line 51, for example:

-Thread.sleep(5000);
+Thread.sleep(20000);

and your test should pass.

Hope it helps,
Ashutosh

On Fri, Dec 3, 2010 at 10:50, Edward Capriolo edlinuxg...@gmail.com wrote:
[quoted test output snipped; see the original message above]
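Rather than tuning a fixed sleep, the test could poll until the metastore's Thrift port actually accepts connections. A minimal sketch of such a helper (the class and method names are my own, not from the Hive source; 29083 is the port the test uses):

```java
import java.io.IOException;
import java.net.InetSocketAddress;
import java.net.Socket;

// Polls a TCP port until a server accepts a connection, instead of
// sleeping for a fixed interval and hoping the server is up.
public final class PortWaiter {
    public static boolean waitForPort(String host, int port, long timeoutMs)
            throws InterruptedException {
        long deadline = System.currentTimeMillis() + timeoutMs;
        while (System.currentTimeMillis() < deadline) {
            try (Socket s = new Socket()) {
                // Short per-attempt connect timeout; success means the
                // server socket is bound and accepting.
                s.connect(new InetSocketAddress(host, port), 500);
                return true;
            } catch (IOException notUpYet) {
                Thread.sleep(200); // back off briefly, then retry
            }
        }
        return false; // server never came up within timeoutMs
    }
}
```

The test would then call something like PortWaiter.waitForPort("localhost", 29083, 30000) before creating its HiveMetaStoreClient, which is robust on both fast and heavily loaded machines.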
Re: Caused by: javax.jdo.JDODataStoreException: Exception thrown while querying indices for table=DBS: HELP needed
Hi Tali,

Did you run the metastore schema upgrade script? This is a requirement if you are upgrading from an older version of Hive to version 0.6.

Thanks.
Carl

On Fri, Dec 3, 2010 at 1:31 PM, Tali K ncherr...@hotmail.com wrote:
Hi All,
We installed a new Hive distribution, 0.6, and copied these two jars into the hive/lib directory:
postgresql-9.0-801.jdbc4.jar
jdo2-core-2.0.jar
We are getting the exception below. Please help!

    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
    at java.lang.reflect.Method.invoke(Method.java:597)
    at org.apache.hadoop.util.RunJar.main(RunJar.java:186)
Caused by: javax.jdo.JDODataStoreException: Exception thrown while querying indices for table=DBS
NestedThrowables: org.postgresql.util.PSQLException: ERROR: current transaction is aborted, commands ignored until end of transaction block
    at org.datanucleus.jdo.NucleusJDOHelper.getJDOExceptionForNucleusException(NucleusJDOHelper.java:313)
    at org.datanucleus.ObjectManagerImpl.getExtent(ObjectManagerImpl.java:4154)
    at org.datanucleus.store.rdbms.query.legacy.JDOQLQueryCompiler.compileCandidates(JDOQLQueryCompiler.java:411)
    at org.datanucleus.store.rdbms.query.legacy.QueryCompiler.executionCompile(QueryCompiler.java:312)
    at org.datanucleus.store.rdbms.query.legacy.JDOQLQueryCompiler.compile(JDOQLQueryCompiler.java:225)
    at org.datanucleus.store.rdbms.query.legacy.JDOQLQuery.compileInternal(JDOQLQuery.java:175)
    at org.datanucleus.store.query.Query.executeQuery(Query.java:1628)
    at org.datanucleus.store.rdbms.query.legacy.JDOQLQuery.executeQuery(JDOQLQuery.java:245)
    at org.datanucleus.store.query.Query.executeWithArray(Query.java:1499)
    at org.datanucleus.jdo.JDOQuery.execute(JDOQuery.java:243)
    at org.apache.hadoop.hive.metastore.ObjectStore.getMDatabase(ObjectStore.java:322)
    at org.apache.hadoop.hive.metastore.ObjectStore.getDatabase(ObjectStore.java:341)
    at org.apache.hadoop.hive.metastore.HiveMetaStore$HMSHandler.createDefaultDB_core(HiveMetaStore.java:359)
    at org.apache.hadoop.hive.metastore.HiveMetaStore$HMSHandler.access$200(HiveMetaStore.java:79)
    at org.apache.hadoop.hive.metastore.HiveMetaStore$HMSHandler$4.run(HiveMetaStore.java:381)
    at org.apache.hadoop.hive.metastore.HiveMetaStore$HMSHandler$4.run(HiveMetaStore.java:378)
    at org.apache.hadoop.hive.metastore.HiveMetaStore$HMSHandler.executeWithRetry(HiveMetaStore.java:234)
    at org.apache.hadoop.hive.metastore.HiveMetaStore$HMSHandler.createDefaultDB(HiveMetaStore.java:378)
    at org.apache.hadoop.hive.metastore.HiveMetaStore$HMSHandler.init(HiveMetaStore.java:171)
    at org.apache.hadoop.hive.metastore.HiveMetaStore$HMSHandler.init(HiveMetaStore.java:136)
    at org.apache.hadoop.hive.metastore.HiveMetaStoreClient.init(HiveMetaStoreClient.java:87)
    at org.apache.hadoop.hive.ql.metadata.Hive.createMetaStoreClient(Hive.java:1269)
    at org.apache.hadoop.hive.ql.metadata.Hive.getMSC(Hive.java:1279)
    at org.apache.hadoop.hive.ql.metadata.Hive.getTablesByPattern(Hive.java:603)
    ... 17 more
Caused by: org.postgresql.util.PSQLException: ERROR: current transaction is aborted, commands ignored until end of transaction block
    at org.postgresql.core.v3.QueryExecutorImpl.receiveErrorResponse(QueryExecutorImpl.java:2102)
    at org.postgresql.core.v3.QueryExecutorImpl.processResults(QueryExecutorImpl.java:1835)
    at org.postgresql.core.v3.QueryExecutorImpl.execute(QueryExecutorImpl.java:257)
    at org.postgresql.jdbc2.AbstractJdbc2Statement.execute(AbstractJdbc2Statement.java:500)
    at org.postgresql.jdbc2.AbstractJdbc2Statement.executeWithFlags(AbstractJdbc2Statement.java:374)
    at org.postgresql.jdbc2.AbstractJdbc2Statement.executeQuery(AbstractJdbc2Statement.java:254)
    at org.postgresql.jdbc2.AbstractJdbc2DatabaseMetaData.getIndexInfo(AbstractJdbc2DatabaseMetaData.java:4023)
    at org.apache.commons.dbcp.DelegatingDatabaseMetaData.getIndexInfo(DelegatingDatabaseMetaData.java:327)
    at org.datanucleus.store.rdbms.schema.RDBMSSchemaHandler.getRDBMSTableIndexInfoForTable(RDBMSSchemaHandler.java:616)
    at org.datanucleus.store.rdbms.schema.RDBMSSchemaHandler.getRDBMSTableIndexInfoForTable(RDBMSSchemaHandler.java:585)
    at org.datanucleus.store.rdbms.schema.RDBMSSchemaHandler.getSchemaData(RDBMSSchemaHandler.java:202)
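For context on the PSQLException above: in PostgreSQL, "current transaction is aborted, commands ignored until end of transaction block" is a follow-on error. It means an earlier statement in the same transaction already failed (here, most likely a metadata query against an out-of-date metastore schema), and every subsequent command is rejected until the transaction is rolled back. A minimal psql illustration of this behavior (the table name is hypothetical):

```sql
BEGIN;
SELECT * FROM no_such_table;  -- fails: relation "no_such_table" does not exist
SELECT 1;                     -- now fails: current transaction is aborted,
                              -- commands ignored until end of transaction block
ROLLBACK;                     -- clears the aborted state
```

So the error to chase in the PostgreSQL server log is the first failure inside the transaction, which the 0.6 metastore schema upgrade that Carl mentions should resolve.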