[
https://issues.apache.org/jira/browse/HIVE-27594?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
]
Harish updated HIVE-27594:
--------------------------
Description:
I set up Hive/Spark/Hadoop (3.3.0) on the same machine, with the Hive metastore on an external PostgreSQL server:
Hive 3.1.3
PostgreSQL 12 as the remote metastore DB (tried 9.2 as well)
Updated the Spark and Hive site.xml files
Used schematool to populate the default tables
Using Oracle Object Storage as the Hadoop storage; I have replaced the actual paths with placeholders.
When I try to open Hive from the terminal I see the error below. I am totally confused by the columns it refers to in the error versus the columns in the DBS table, yet it works fine from Spark.
CATALOG_NAME is not part of
[https://github.com/apache/hive/blob/master/standalone-metastore/metastore-server/src/main/sql/postgres/hive-schema-3.1.0.postgres.sql]
so how come the code is referring to a different structure?
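The mismatch can be narrowed down mechanically. The following sketch diffs the two column lists: `insert_cols` is copied from the failing INSERT in the log below, while `schema_cols` is an assumption based on my reading of the DBS definition in hive-schema-3.1.0.postgres.sql, where the catalog column is named CTLG_NAME rather than CATALOG_NAME:

```python
# Hypothetical comparison of the column list the metastore code generates
# against the columns the 3.1.0 schema script creates (assumed, see above).
insert_cols = {"DB_ID", "CATALOG_NAME", "DESC", "DB_LOCATION_URI",
               "NAME", "OWNER_NAME", "OWNER_TYPE"}
schema_cols = {"DB_ID", "DESC", "DB_LOCATION_URI", "NAME",
               "OWNER_NAME", "OWNER_TYPE", "CTLG_NAME"}

# Columns the code expects but the schema script does not create:
print(sorted(insert_cols - schema_cols))  # ['CATALOG_NAME']
# Columns the schema script creates but the failing INSERT never mentions:
print(sorted(schema_cols - insert_cols))  # ['CTLG_NAME']
```

If that assumption holds, the two sides disagree only on the name of the catalog column, which would point at a mismatch between the deployed schema and the ORM mapping in the metastore jars actually on the classpath, not at a missing table.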
2023-08-05 09:30:21,188 INFO objectstorage.ObjectStorageClient: Setting endpoint to https://oracle_ojectstorage
2023-08-05 09:30:21,438 INFO jersey.JerseyHttpClientBuilder: Setting connector provider to ApacheConnectorProvider
2023-08-05 09:30:21,548 INFO store.BmcDataStore: Using upload configuration: UploadConfiguration(minimumLengthForMultipartUpload=128, lengthPerUploadPart=128, maxPartsForMultipartUpload=10000, enforceMd5BeforeUpload=false, enforceMd5BeforeMultipartUpload=false, allowMultipartUploads=true, allowParallelUploads=true, disableAutoAbort=false)
2023-08-05 09:30:21,551 INFO bmc.ClientRuntime: Using SDK: Oracle-JavaSDK/3.17.1
2023-08-05 09:30:21,551 INFO bmc.ClientRuntime: User agent set to: Oracle-JavaSDK/3.17.1 (Linux/3.10.0-1160.66.1.el7.x86_64; Java/1.8.0_342; OpenJDK 64-Bit Server VM/25.342-b07) Oracle-HDFS_Connector/3.3.4.1.2.0
2023-08-05 09:30:21,556 INFO store.BmcDataStore: Object metadata caching disabled
2023-08-05 09:30:21,556 INFO store.BmcDataStore: fs.oci.caching.object.parquet.enabled is disabled, setting parquet cache spec to 'maximumSize=0', which disables the cache
2023-08-05 09:30:21,557 INFO hdfs.BmcFilesystemImpl: Setting working directory to oci://path/user/user, and initialized uri to oci://path
2023-08-05 09:30:21,570 INFO hdfs.BmcFilesystem: Physically closing delegate for oci://path/
2023-08-05 09:30:21,598 WARN metastore.HiveMetaStore: Retrying creating default database after error: Exception thrown flushing changes to datastore
javax.jdo.JDODataStoreException: Exception thrown flushing changes to datastore
    at org.datanucleus.api.jdo.NucleusJDOHelper.getJDOExceptionForNucleusException(NucleusJDOHelper.java:543)
    at org.datanucleus.api.jdo.JDOTransaction.commit(JDOTransaction.java:171)
    at org.apache.hadoop.hive.metastore.ObjectStore.commitTransaction(ObjectStore.java:766)
    at org.apache.hadoop.hive.metastore.ObjectStore.createDatabase(ObjectStore.java:954)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.lang.reflect.Method.invoke(Method.java:498)
    at org.apache.hadoop.hive.metastore.RawStoreProxy.invoke(RawStoreProxy.java:97)
    at com.sun.proxy.$Proxy36.createDatabase(Unknown Source)
    at org.apache.hadoop.hive.metastore.HiveMetaStore$HMSHandler.createDefaultDB_core(HiveMetaStore.java:753)
    at org.apache.hadoop.hive.metastore.HiveMetaStore$HMSHandler.createDefaultDB(HiveMetaStore.java:771)
    at org.apache.hadoop.hive.metastore.HiveMetaStore$HMSHandler.init(HiveMetaStore.java:540)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.lang.reflect.Method.invoke(Method.java:498)
    at org.apache.hadoop.hive.metastore.RetryingHMSHandler.invokeInternal(RetryingHMSHandler.java:147)
    at org.apache.hadoop.hive.metastore.RetryingHMSHandler.invoke(RetryingHMSHandler.java:108)
    at org.apache.hadoop.hive.metastore.RetryingHMSHandler.<init>(RetryingHMSHandler.java:80)
    at org.apache.hadoop.hive.metastore.RetryingHMSHandler.getProxy(RetryingHMSHandler.java:93)
    at org.apache.hadoop.hive.metastore.HiveMetaStore.newRetryingHMSHandler(HiveMetaStore.java:8678)
    at org.apache.hadoop.hive.metastore.HiveMetaStoreClient.<init>(HiveMetaStoreClient.java:169)
    at org.apache.hadoop.hive.ql.metadata.SessionHiveMetaStoreClient.<init>(SessionHiveMetaStoreClient.java:94)
    at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
    at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
    at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
    at java.lang.reflect.Constructor.newInstance(Constructor.java:423)
    at org.apache.hadoop.hive.metastore.utils.JavaUtils.newInstance(JavaUtils.java:84)
    at org.apache.hadoop.hive.metastore.RetryingMetaStoreClient.<init>(RetryingMetaStoreClient.java:95)
    at org.apache.hadoop.hive.metastore.RetryingMetaStoreClient.getProxy(RetryingMetaStoreClient.java:148)
    at org.apache.hadoop.hive.metastore.RetryingMetaStoreClient.getProxy(RetryingMetaStoreClient.java:119)
    at org.apache.hadoop.hive.ql.metadata.Hive.createMetaStoreClient(Hive.java:4306)
    at org.apache.hadoop.hive.ql.metadata.Hive.getMSC(Hive.java:4374)
    at org.apache.hadoop.hive.ql.metadata.Hive.getMSC(Hive.java:4354)
    at org.apache.hadoop.hive.ql.metadata.Hive.getAllFunctions(Hive.java:4610)
    at org.apache.hadoop.hive.ql.metadata.Hive.reloadFunctions(Hive.java:291)
    at org.apache.hadoop.hive.ql.metadata.Hive.registerAllFunctionsOnce(Hive.java:274)
    at org.apache.hadoop.hive.ql.metadata.Hive.<init>(Hive.java:442)
    at org.apache.hadoop.hive.ql.metadata.Hive.create(Hive.java:382)
    at org.apache.hadoop.hive.ql.metadata.Hive.getInternal(Hive.java:362)
    at org.apache.hadoop.hive.ql.metadata.Hive.get(Hive.java:331)
    at org.apache.hadoop.hive.ql.metadata.HiveMaterializedViewsRegistry.init(HiveMaterializedViewsRegistry.java:133)
    at org.apache.hadoop.hive.cli.CliDriver.run(CliDriver.java:755)
    at org.apache.hadoop.hive.cli.CliDriver.main(CliDriver.java:683)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.lang.reflect.Method.invoke(Method.java:498)
    at org.apache.hadoop.util.RunJar.run(RunJar.java:323)
    at org.apache.hadoop.util.RunJar.main(RunJar.java:236)
NestedThrowablesStackTrace:
java.sql.BatchUpdateException: Batch entry 0 INSERT INTO "DBS" ("DB_ID","CATALOG_NAME","DESC","DB_LOCATION_URI","NAME","OWNER_NAME","OWNER_TYPE") VALUES (256,'hive','Default Hive database','oci://path/','default','public','ROLE') was aborted. Call getNextException to see the cause.
    at org.postgresql.jdbc.BatchResultHandler.handleError(BatchResultHandler.java:133)
    at org.postgresql.core.v3.QueryExecutorImpl$1.handleError(QueryExecutorImpl.java:419)
    at org.postgresql.core.v3.QueryExecutorImpl.processResults(QueryExecutorImpl.java:2004)
    at org.postgresql.core.v3.QueryExecutorImpl.execute(QueryExecutorImpl.java:360)
    at org.postgresql.jdbc.PgStatement.executeBatch(PgStatement.java:1019)
    at com.zaxxer.hikari.pool.ProxyStatement.executeBatch(ProxyStatement.java:125)
    at com.zaxxer.hikari.pool.HikariProxyPreparedStatement.executeBatch(HikariProxyPreparedStatement.java)
    at org.datanucleus.store.rdbms.ParamLoggingPreparedStatement.executeBatch(ParamLoggingPreparedStatement.java:366)
    at org.datanucleus.store.rdbms.SQLController.processConnectionStatement(SQLController.java:676)
    at org.datanucleus.store.rdbms.SQLController.processStatementsForConnection(SQLController.java:644)
    at org.datanucleus.store.rdbms.SQLController$1.transactionFlushed(SQLController.java:731)
    at org.datanucleus.store.connection.AbstractManagedConnection.transactionFlushed(AbstractManagedConnection.java:89)
    at org.datanucleus.store.connection.ConnectionManagerImpl$2.transactionFlushed(ConnectionManagerImpl.java:450)
    at org.datanucleus.TransactionImpl.flush(TransactionImpl.java:210)
    at org.datanucleus.TransactionImpl.commit(TransactionImpl.java:274)
    at org.datanucleus.api.jdo.JDOTransaction.commit(JDOTransaction.java:107)
    at org.apache.hadoop.hive.metastore.ObjectStore.commitTransaction(ObjectStore.java:766)
    at org.apache.hadoop.hive.metastore.ObjectStore.createDatabase(ObjectStore.java:954)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.lang.reflect.Method.invoke(Method.java:498)
    at org.apache.hadoop.hive.metastore.RawStoreProxy.invoke(RawStoreProxy.java:97)
    at com.sun.proxy.$Proxy36.createDatabase(Unknown Source)
    at org.apache.hadoop.hive.metastore.HiveMetaStore$HMSHandler.createDefaultDB_core(HiveMetaStore.java:753)
    at org.apache.hadoop.hive.metastore.HiveMetaStore$HMSHandler.createDefaultDB(HiveMetaStore.java:771)
    at org.apache.hadoop.hive.metastore.HiveMetaStore$HMSHandler.init(HiveMetaStore.java:540)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.lang.reflect.Method.invoke(Method.java:498)
    at org.apache.hadoop.hive.metastore.RetryingHMSHandler.invokeInternal(RetryingHMSHandler.java:147)
    at org.apache.hadoop.hive.metastore.RetryingHMSHandler.invoke(RetryingHMSHandler.java:108)
    at org.apache.hadoop.hive.metastore.RetryingHMSHandler.<init>(RetryingHMSHandler.java:80)
    at org.apache.hadoop.hive.metastore.RetryingHMSHandler.getProxy(RetryingHMSHandler.java:93)
    at org.apache.hadoop.hive.metastore.HiveMetaStore.newRetryingHMSHandler(HiveMetaStore.java:8678)
    at org.apache.hadoop.hive.metastore.HiveMetaStoreClient.<init>(HiveMetaStoreClient.java:169)
    at org.apache.hadoop.hive.ql.metadata.SessionHiveMetaStoreClient.<init>(SessionHiveMetaStoreClient.java:94)
    at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
    at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
    at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
    at java.lang.reflect.Constructor.newInstance(Constructor.java:423)
    at org.apache.hadoop.hive.metastore.utils.JavaUtils.newInstance(JavaUtils.java:84)
    at org.apache.hadoop.hive.metastore.RetryingMetaStoreClient.<init>(RetryingMetaStoreClient.java:95)
    at org.apache.hadoop.hive.metastore.RetryingMetaStoreClient.getProxy(RetryingMetaStoreClient.java:148)
    at org.apache.hadoop.hive.metastore.RetryingMetaStoreClient.getProxy(RetryingMetaStoreClient.java:119)
    at org.apache.hadoop.hive.ql.metadata.Hive.createMetaStoreClient(Hive.java:4306)
    at org.apache.hadoop.hive.ql.metadata.Hive.getMSC(Hive.java:4374)
    at org.apache.hadoop.hive.ql.metadata.Hive.getMSC(Hive.java:4354)
    at org.apache.hadoop.hive.ql.metadata.Hive.getAllFunctions(Hive.java:4610)
    at org.apache.hadoop.hive.ql.metadata.Hive.reloadFunctions(Hive.java:291)
    at org.apache.hadoop.hive.ql.metadata.Hive.registerAllFunctionsOnce(Hive.java:274)
    at org.apache.hadoop.hive.ql.metadata.Hive.<init>(Hive.java:442)
    at org.apache.hadoop.hive.ql.metadata.Hive.create(Hive.java:382)
    at org.apache.hadoop.hive.ql.metadata.Hive.getInternal(Hive.java:362)
    at org.apache.hadoop.hive.ql.metadata.Hive.get(Hive.java:331)
    at org.apache.hadoop.hive.ql.metadata.HiveMaterializedViewsRegistry.init(HiveMaterializedViewsRegistry.java:133)
    at org.apache.hadoop.hive.cli.CliDriver.run(CliDriver.java:755)
    at org.apache.hadoop.hive.cli.CliDriver.main(CliDriver.java:683)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.lang.reflect.Method.invoke(Method.java:498)
    at org.apache.hadoop.util.RunJar.run(RunJar.java:323)
    at org.apache.hadoop.util.RunJar.main(RunJar.java:236)
2023-08-05 09:30:21,602 WARN metastore.ObjectStore: Failed to get database hive.default, returning NoSuchObjectException
2023-08-05 09:30:21,605 ERROR metastore.RetryingHMSHandler: Retrying HMSHandler after 2000 ms (attempt 1 of 10) with error: javax.jdo.JDODataStoreException: Exception thrown flushing changes to datastore
    at org.datanucleus.api.jdo.NucleusJDOHelper.getJDOExceptionForNucleusException(NucleusJDOHelper.java:543)
    at org.datanucleus.api.jdo.JDOTransaction.commit(JDOTransaction.java:171)
    at org.apache.hadoop.hive.metastore.ObjectStore.commitTransaction(ObjectStore.java:766)
    at org.apache.hadoop.hive.metastore.ObjectStore.createDatabase(ObjectStore.java:954)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.lang.reflect.Method.invoke(Method.java:498)
    at org.apache.hadoop.hive.metastore.RawStoreProxy.invoke(RawStoreProxy.java:97)
    at com.sun.proxy.$Proxy36.createDatabase(Unknown Source)
    at org.apache.hadoop.hive.metastore.HiveMetaStore$HMSHandler.createDefaultDB_core(HiveMetaStore.java:753)
    at org.apache.hadoop.hive.metastore.HiveMetaStore$HMSHandler.createDefaultDB(HiveMetaStore.java:775)
    at org.apache.hadoop.hive.metastore.HiveMetaStore$HMSHandler.init(HiveMetaStore.java:540)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.lang.reflect.Method.invoke(Method.java:498)
    at org.apache.hadoop.hive.metastore.RetryingHMSHandler.invokeInternal(RetryingHMSHandler.java:147)
    at org.apache.hadoop.hive.metastore.RetryingHMSHandler.invoke(RetryingHMSHandler.java:108)
    at org.apache.hadoop.hive.metastore.RetryingHMSHandler.<init>(RetryingHMSHandler.java:80)
    at org.apache.hadoop.hive.metastore.RetryingHMSHandler.getProxy(RetryingHMSHandler.java:93)
    at org.apache.hadoop.hive.metastore.HiveMetaStore.newRetryingHMSHandler(HiveMetaStore.java:8678)
    at org.apache.hadoop.hive.metastore.HiveMetaStoreClient.<init>(HiveMetaStoreClient.java:169)
    at org.apache.hadoop.hive.ql.metadata.SessionHiveMetaStoreClient.<init>(SessionHiveMetaStoreClient.java:94)
    at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
    at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
    at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
    at java.lang.reflect.Constructor.newInstance(Constructor.java:423)
    at org.apache.hadoop.hive.metastore.utils.JavaUtils.newInstance(JavaUtils.java:84)
    at org.apache.hadoop.hive.metastore.RetryingMetaStoreClient.<init>(RetryingMetaStoreClient.java:95)
    at org.apache.hadoop.hive.metastore.RetryingMetaStoreClient.getProxy(RetryingMetaStoreClient.java:148)
    at org.apache.hadoop.hive.metastore.RetryingMetaStoreClient.getProxy(RetryingMetaStoreClient.java:119)
    at org.apache.hadoop.hive.ql.metadata.Hive.createMetaStoreClient(Hive.java:4306)
    at org.apache.hadoop.hive.ql.metadata.Hive.getMSC(Hive.java:4374)
    at org.apache.hadoop.hive.ql.metadata.Hive.getMSC(Hive.java:4354)
    at org.apache.hadoop.hive.ql.metadata.Hive.getAllFunctions(Hive.java:4610)
    at org.apache.hadoop.hive.ql.metadata.Hive.reloadFunctions(Hive.java:291)
    at org.apache.hadoop.hive.ql.metadata.Hive.registerAllFunctionsOnce(Hive.java:274)
    at org.apache.hadoop.hive.ql.metadata.Hive.<init>(Hive.java:442)
    at org.apache.hadoop.hive.ql.metadata.Hive.create(Hive.java:382)
    at org.apache.hadoop.hive.ql.metadata.Hive.getInternal(Hive.java:362)
    at org.apache.hadoop.hive.ql.metadata.Hive.get(Hive.java:331)
    at org.apache.hadoop.hive.ql.metadata.HiveMaterializedViewsRegistry.init(HiveMaterializedViewsRegistry.java:133)
    at org.apache.hadoop.hive.cli.CliDriver.run(CliDriver.java:755)
    at org.apache.hadoop.hive.cli.CliDriver.main(CliDriver.java:683)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.lang.reflect.Method.invoke(Method.java:498)
    at org.apache.hadoop.util.RunJar.run(RunJar.java:323)
    at org.apache.hadoop.util.RunJar.main(RunJar.java:236)
hive-site.xml:
<?xml version="1.0" encoding="UTF-8" standalone="no"?>
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
<configuration>
  <property>
    <name>hive.exec.scratchdir</name>
    <value>/dd/tmp/</value>
  </property>
  <property>
    <name>datanucleus.autoCreateSchema</name>
    <value>false</value>
  </property>
  <property>
    <name>datanucleus.fixedDatastore</name>
    <value>true</value>
  </property>
  <property>
    <name>datanucleus.autoStartMechanism</name>
    <value>SchemaTable</value>
  </property>
  <property>
    <name>hive.metastore.schema.verification</name>
    <value>false</value>
    <description></description>
  </property>
  <property>
    <name>spark.sql.warehouse.dir</name>
    <value>oci://cx/</value>
  </property>
  <property>
    <name>hive.metastore.warehouse.dir</name>
    <value>oci://cx/</value>
  </property>
  <property>
    <name>javax.jdo.option.ConnectionURL</name>
    <value>jdbc:postgresql://0:5432/hivemeta</value>
  </property>
  <property>
    <name>javax.jdo.option.ConnectionDriverName</name>
    <value>org.postgresql.Driver</value>
  </property>
  <property>
    <name>javax.jdo.option.ConnectionUserName</name>
    <value>cccc</value>
  </property>
  <property>
    <name>javax.jdo.option.ConnectionPassword</name>
    <value>cc@3</value>
  </property>
  <property>
    <name>hive.metastore.port</name>
    <value>9083</value>
  </property>
  <property>
    <name>hive.server2.thrift.port</name>
    <value>10000</value>
  </property>
  <property>
    <name>mapreduce.input.fileinputformat.input.dir.recursive</name>
    <value>true</value>
  </property>
  <property>
    <name>hive.server2.use.SSL</name>
    <value>false</value>
  </property>
</configuration>
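To see what the metastore database actually contains (as opposed to what the code expects), a quick check can be run against the `hivemeta` database from the ConnectionURL above. This is a sketch using PostgreSQL's standard information_schema; the table name assumes the default unqualified "DBS" created by schematool:

```sql
-- List the real columns of "DBS" as deployed, to compare against the
-- column list in the failing INSERT from the error above.
SELECT column_name
FROM information_schema.columns
WHERE table_name = 'DBS'
ORDER BY ordinal_position;
```

Hive's schematool also has a validation mode (`schematool -dbType postgres -validate`) that compares the deployed schema against what the installed Hive jars expect, which should flag this kind of column mismatch directly.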
was:
I set up hive/spark/hadoop(3.3.0) on same machine and hive meta on external PG
server
Hive 3.1.3,
PG 12 - remote meta, (tried 9.2 as well)
changed spark and hive site.xml
used schematool to populate default tables
USING oracle object storage as hadoop storage. I have replaced actual path with
place holder
When i try to open hive from terminal i see below error. I am total confused on
columns it refers in error vs in DBS table. But it works fine from spark.
Here CATALOG_NAME is not part of
[https://github.com/apache/hive/blob/master/standalone-metastore/metastore-server/src/main/sql/postgres/hive-schema-3.1.0.postgres.sql]
how come code is referring different structure?
2023-08-05 09:30:21,188 INFO objectstorage.ObjectStorageClient: Setting
endpoint to https://oracle_ojectstorage 2023-08-05 09:30:21,438 INFO
jersey.JerseyHttpClientBuilder: Setting connector provider to
ApacheConnectorProvider 2023-08-05 09:30:21,548 INFO store.BmcDataStore: Using
upload configuration: UploadConfiguration(minimumLengthForMultipartUpload=128,
lengthPerUploadPart=128, maxPartsForMultipartUpload=10000,
enforceMd5BeforeUpload=false, enforceMd5BeforeMultipartUpload=false,
allowMultipartUploads=true, allowParallelUploads=true, disableAutoAbort=false)
2023-08-05 09:30:21,551 INFO bmc.ClientRuntime: Using SDK:
Oracle-JavaSDK/3.17.1 2023-08-05 09:30:21,551 INFO bmc.ClientRuntime: User
agent set to: Oracle-JavaSDK/3.17.1 (Linux/3.10.0-1160.66.1.el7.x86_64;
Java/1.8.0_342; OpenJDK 64-Bit Server VM/25.342-b07)
Oracle-HDFS_Connector/3.3.4.1.2.0 2023-08-05 09:30:21,556 INFO
store.BmcDataStore: Object metadata caching disabled 2023-08-05 09:30:21,556
INFO store.BmcDataStore: fs.oci.caching.object.parquet.enabled is disabled,
setting parquet cache spec to 'maximumSize=0', which disables the cache
2023-08-05 09:30:21,557 INFO hdfs.BmcFilesystemImpl: Setting working directory
to oci://path/user/user, and initialized uri to oci://path 2023-08-05
09:30:21,570 INFO hdfs.BmcFilesystem: Physically closing delegate for
oci://path/ 2023-08-05 09:30:21,598 WARN metastore.HiveMetaStore: Retrying
creating default database after error: Exception thrown flushing changes to
datastore javax.jdo.JDODataStoreException: Exception thrown flushing changes to
datastore at
org.datanucleus.api.jdo.NucleusJDOHelper.getJDOExceptionForNucleusException(NucleusJDOHelper.java:543)
at org.datanucleus.api.jdo.JDOTransaction.commit(JDOTransaction.java:171) at
org.apache.hadoop.hive.metastore.ObjectStore.commitTransaction(ObjectStore.java:766)
at
org.apache.hadoop.hive.metastore.ObjectStore.createDatabase(ObjectStore.java:954)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498) at
org.apache.hadoop.hive.metastore.RawStoreProxy.invoke(RawStoreProxy.java:97) at
com.sun.proxy.$Proxy36.createDatabase(Unknown Source) at
org.apache.hadoop.hive.metastore.HiveMetaStore$HMSHandler.createDefaultDB_core(HiveMetaStore.java:753)
at
org.apache.hadoop.hive.metastore.HiveMetaStore$HMSHandler.createDefaultDB(HiveMetaStore.java:771)
at
org.apache.hadoop.hive.metastore.HiveMetaStore$HMSHandler.init(HiveMetaStore.java:540)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498) at
org.apache.hadoop.hive.metastore.RetryingHMSHandler.invokeInternal(RetryingHMSHandler.java:147)
at
org.apache.hadoop.hive.metastore.RetryingHMSHandler.invoke(RetryingHMSHandler.java:108)
at
org.apache.hadoop.hive.metastore.RetryingHMSHandler.(RetryingHMSHandler.java:80)
at
org.apache.hadoop.hive.metastore.RetryingHMSHandler.getProxy(RetryingHMSHandler.java:93)
at
org.apache.hadoop.hive.metastore.HiveMetaStore.newRetryingHMSHandler(HiveMetaStore.java:8678)
at
org.apache.hadoop.hive.metastore.HiveMetaStoreClient.(HiveMetaStoreClient.java:169)
at
org.apache.hadoop.hive.ql.metadata.SessionHiveMetaStoreClient.(SessionHiveMetaStoreClient.java:94)
at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method) at
sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
at
sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
at java.lang.reflect.Constructor.newInstance(Constructor.java:423) at
org.apache.hadoop.hive.metastore.utils.JavaUtils.newInstance(JavaUtils.java:84)
at
org.apache.hadoop.hive.metastore.RetryingMetaStoreClient.(RetryingMetaStoreClient.java:95)
at
org.apache.hadoop.hive.metastore.RetryingMetaStoreClient.getProxy(RetryingMetaStoreClient.java:148)
at
org.apache.hadoop.hive.metastore.RetryingMetaStoreClient.getProxy(RetryingMetaStoreClient.java:119)
at
org.apache.hadoop.hive.ql.metadata.Hive.createMetaStoreClient(Hive.java:4306)
at org.apache.hadoop.hive.ql.metadata.Hive.getMSC(Hive.java:4374) at
org.apache.hadoop.hive.ql.metadata.Hive.getMSC(Hive.java:4354) at
org.apache.hadoop.hive.ql.metadata.Hive.getAllFunctions(Hive.java:4610) at
org.apache.hadoop.hive.ql.metadata.Hive.reloadFunctions(Hive.java:291) at
org.apache.hadoop.hive.ql.metadata.Hive.registerAllFunctionsOnce(Hive.java:274)
at org.apache.hadoop.hive.ql.metadata.Hive.(Hive.java:442) at
org.apache.hadoop.hive.ql.metadata.Hive.create(Hive.java:382) at
org.apache.hadoop.hive.ql.metadata.Hive.getInternal(Hive.java:362) at
org.apache.hadoop.hive.ql.metadata.Hive.get(Hive.java:331) at
org.apache.hadoop.hive.ql.metadata.HiveMaterializedViewsRegistry.init(HiveMaterializedViewsRegistry.java:133)
at org.apache.hadoop.hive.cli.CliDriver.run(CliDriver.java:755) at
org.apache.hadoop.hive.cli.CliDriver.main(CliDriver.java:683) at
sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498) at
org.apache.hadoop.util.RunJar.run(RunJar.java:323) at
org.apache.hadoop.util.RunJar.main(RunJar.java:236) NestedThrowablesStackTrace:
java.sql.BatchUpdateException: Batch entry 0 INSERT INTO "DBS"
("DB_ID","CATALOG_NAME","DESC","DB_LOCATION_URI","NAME","OWNER_NAME","OWNER_TYPE")
VALUES (256,'hive','Default Hive
database','oci://path/','default','public','ROLE') was aborted. Call
getNextException to see the cause. at
org.postgresql.jdbc.BatchResultHandler.handleError(BatchResultHandler.java:133)
at
org.postgresql.core.v3.QueryExecutorImpl$1.handleError(QueryExecutorImpl.java:419)
at
org.postgresql.core.v3.QueryExecutorImpl.processResults(QueryExecutorImpl.java:2004)
at
org.postgresql.core.v3.QueryExecutorImpl.execute(QueryExecutorImpl.java:360) at
org.postgresql.jdbc.PgStatement.executeBatch(PgStatement.java:1019) at
com.zaxxer.hikari.pool.ProxyStatement.executeBatch(ProxyStatement.java:125) at
com.zaxxer.hikari.pool.HikariProxyPreparedStatement.executeBatch(HikariProxyPreparedStatement.java)
at
org.datanucleus.store.rdbms.ParamLoggingPreparedStatement.executeBatch(ParamLoggingPreparedStatement.java:366)
at
org.datanucleus.store.rdbms.SQLController.processConnectionStatement(SQLController.java:676)
at
org.datanucleus.store.rdbms.SQLController.processStatementsForConnection(SQLController.java:644)
at
org.datanucleus.store.rdbms.SQLController$1.transactionFlushed(SQLController.java:731)
at
org.datanucleus.store.connection.AbstractManagedConnection.transactionFlushed(AbstractManagedConnection.java:89)
at
org.datanucleus.store.connection.ConnectionManagerImpl$2.transactionFlushed(ConnectionManagerImpl.java:450)
at org.datanucleus.TransactionImpl.flush(TransactionImpl.java:210) at
org.datanucleus.TransactionImpl.commit(TransactionImpl.java:274) at
org.datanucleus.api.jdo.JDOTransaction.commit(JDOTransaction.java:107) at
org.apache.hadoop.hive.metastore.ObjectStore.commitTransaction(ObjectStore.java:766)
at
org.apache.hadoop.hive.metastore.ObjectStore.createDatabase(ObjectStore.java:954)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498) at
org.apache.hadoop.hive.metastore.RawStoreProxy.invoke(RawStoreProxy.java:97) at
com.sun.proxy.$Proxy36.createDatabase(Unknown Source) at
org.apache.hadoop.hive.metastore.HiveMetaStore$HMSHandler.createDefaultDB_core(HiveMetaStore.java:753)
at
org.apache.hadoop.hive.metastore.HiveMetaStore$HMSHandler.createDefaultDB(HiveMetaStore.java:771)
at
org.apache.hadoop.hive.metastore.HiveMetaStore$HMSHandler.init(HiveMetaStore.java:540)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498) at
org.apache.hadoop.hive.metastore.RetryingHMSHandler.invokeInternal(RetryingHMSHandler.java:147)
at
org.apache.hadoop.hive.metastore.RetryingHMSHandler.invoke(RetryingHMSHandler.java:108)
at
org.apache.hadoop.hive.metastore.RetryingHMSHandler.(RetryingHMSHandler.java:80)
at
org.apache.hadoop.hive.metastore.RetryingHMSHandler.getProxy(RetryingHMSHandler.java:93)
at
org.apache.hadoop.hive.metastore.HiveMetaStore.newRetryingHMSHandler(HiveMetaStore.java:8678)
at
org.apache.hadoop.hive.metastore.HiveMetaStoreClient.(HiveMetaStoreClient.java:169)
at
org.apache.hadoop.hive.ql.metadata.SessionHiveMetaStoreClient.(SessionHiveMetaStoreClient.java:94)
at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method) at
sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
at
sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
at java.lang.reflect.Constructor.newInstance(Constructor.java:423) at
org.apache.hadoop.hive.metastore.utils.JavaUtils.newInstance(JavaUtils.java:84)
at
org.apache.hadoop.hive.metastore.RetryingMetaStoreClient.(RetryingMetaStoreClient.java:95)
at
org.apache.hadoop.hive.metastore.RetryingMetaStoreClient.getProxy(RetryingMetaStoreClient.java:148)
at
org.apache.hadoop.hive.metastore.RetryingMetaStoreClient.getProxy(RetryingMetaStoreClient.java:119)
at
org.apache.hadoop.hive.ql.metadata.Hive.createMetaStoreClient(Hive.java:4306)
at org.apache.hadoop.hive.ql.metadata.Hive.getMSC(Hive.java:4374) at
org.apache.hadoop.hive.ql.metadata.Hive.getMSC(Hive.java:4354) at
org.apache.hadoop.hive.ql.metadata.Hive.getAllFunctions(Hive.java:4610) at
org.apache.hadoop.hive.ql.metadata.Hive.reloadFunctions(Hive.java:291) at
org.apache.hadoop.hive.ql.metadata.Hive.registerAllFunctionsOnce(Hive.java:274)
at org.apache.hadoop.hive.ql.metadata.Hive.(Hive.java:442) at
org.apache.hadoop.hive.ql.metadata.Hive.create(Hive.java:382) at
org.apache.hadoop.hive.ql.metadata.Hive.getInternal(Hive.java:362) at
org.apache.hadoop.hive.ql.metadata.Hive.get(Hive.java:331) at
org.apache.hadoop.hive.ql.metadata.HiveMaterializedViewsRegistry.init(HiveMaterializedViewsRegistry.java:133)
at org.apache.hadoop.hive.cli.CliDriver.run(CliDriver.java:755) at
org.apache.hadoop.hive.cli.CliDriver.main(CliDriver.java:683) at
sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498) at
org.apache.hadoop.util.RunJar.run(RunJar.java:323) at
org.apache.hadoop.util.RunJar.main(RunJar.java:236) 2023-08-05 09:30:21,602
WARN metastore.ObjectStore: Failed to get database hive.default, returning
NoSuchObjectException 2023-08-05 09:30:21,605 ERROR
metastore.RetryingHMSHandler: Retrying HMSHandler after 2000 ms (attempt 1 of
10) with error: javax.jdo.JDODataStoreException: Exception thrown flushing
changes to datastore at
org.datanucleus.api.jdo.NucleusJDOHelper.getJDOExceptionForNucleusException(NucleusJDOHelper.java:543)
at org.datanucleus.api.jdo.JDOTransaction.commit(JDOTransaction.java:171) at
org.apache.hadoop.hive.metastore.ObjectStore.commitTransaction(ObjectStore.java:766)
at
org.apache.hadoop.hive.metastore.ObjectStore.createDatabase(ObjectStore.java:954)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498) at
org.apache.hadoop.hive.metastore.RawStoreProxy.invoke(RawStoreProxy.java:97) at
com.sun.proxy.$Proxy36.createDatabase(Unknown Source) at
org.apache.hadoop.hive.metastore.HiveMetaStore$HMSHandler.createDefaultDB_core(HiveMetaStore.java:753)
at
org.apache.hadoop.hive.metastore.HiveMetaStore$HMSHandler.createDefaultDB(HiveMetaStore.java:775)
at
org.apache.hadoop.hive.metastore.HiveMetaStore$HMSHandler.init(HiveMetaStore.java:540)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498) at
org.apache.hadoop.hive.metastore.RetryingHMSHandler.invokeInternal(RetryingHMSHandler.java:147)
at
org.apache.hadoop.hive.metastore.RetryingHMSHandler.invoke(RetryingHMSHandler.java:108)
at
org.apache.hadoop.hive.metastore.RetryingHMSHandler.(RetryingHMSHandler.java:80)
at
org.apache.hadoop.hive.metastore.RetryingHMSHandler.getProxy(RetryingHMSHandler.java:93)
at
org.apache.hadoop.hive.metastore.HiveMetaStore.newRetryingHMSHandler(HiveMetaStore.java:8678)
at
org.apache.hadoop.hive.metastore.HiveMetaStoreClient.<init>(HiveMetaStoreClient.java:169)
at
org.apache.hadoop.hive.ql.metadata.SessionHiveMetaStoreClient.<init>(SessionHiveMetaStoreClient.java:94)
at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method) at
sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
at
sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
at java.lang.reflect.Constructor.newInstance(Constructor.java:423) at
org.apache.hadoop.hive.metastore.utils.JavaUtils.newInstance(JavaUtils.java:84)
at
org.apache.hadoop.hive.metastore.RetryingMetaStoreClient.<init>(RetryingMetaStoreClient.java:95)
at
org.apache.hadoop.hive.metastore.RetryingMetaStoreClient.getProxy(RetryingMetaStoreClient.java:148)
at
org.apache.hadoop.hive.metastore.RetryingMetaStoreClient.getProxy(RetryingMetaStoreClient.java:119)
at
org.apache.hadoop.hive.ql.metadata.Hive.createMetaStoreClient(Hive.java:4306)
at org.apache.hadoop.hive.ql.metadata.Hive.getMSC(Hive.java:4374) at
org.apache.hadoop.hive.ql.metadata.Hive.getMSC(Hive.java:4354) at
org.apache.hadoop.hive.ql.metadata.Hive.getAllFunctions(Hive.java:4610) at
org.apache.hadoop.hive.ql.metadata.Hive.reloadFunctions(Hive.java:291) at
org.apache.hadoop.hive.ql.metadata.Hive.registerAllFunctionsOnce(Hive.java:274)
at org.apache.hadoop.hive.ql.metadata.Hive.<init>(Hive.java:442) at
org.apache.hadoop.hive.ql.metadata.Hive.create(Hive.java:382) at
org.apache.hadoop.hive.ql.metadata.Hive.getInternal(Hive.java:362) at
org.apache.hadoop.hive.ql.metadata.Hive.get(Hive.java:331) at
org.apache.hadoop.hive.ql.metadata.HiveMaterializedViewsRegistry.init(HiveMaterializedViewsRegistry.java:133)
at org.apache.hadoop.hive.cli.CliDriver.run(CliDriver.java:755) at
org.apache.hadoop.hive.cli.CliDriver.main(CliDriver.java:683) at
sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498) at
org.apache.hadoop.util.RunJar.run(RunJar.java:323) at
org.apache.hadoop.util.RunJar.main(RunJar.java:236)
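
The failed batch INSERT further down in this report targets a "CATALOG_NAME" column in "DBS", while the linked hive-schema-3.1.0.postgres.sql defines the catalog column as "CTLG_NAME". A quick diagnostic sketch (not from the original report; assumes you can connect with psql to the metastore database named in javax.jdo.option.ConnectionURL, here hivemeta) to see which columns the live table actually has:

```sql
-- List the columns PostgreSQL actually has in the metastore "DBS" table.
-- If the live table lacks the column the failing INSERT names, the
-- metastore jars on the classpath do not match the schema that
-- schematool installed.
SELECT column_name, data_type
FROM information_schema.columns
WHERE table_name = 'DBS'
ORDER BY ordinal_position;
```

Comparing this output against the column list in the failing INSERT pinpoints the jar/schema mismatch.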
> Hive 3.1.3 with external PG metastore referring to wrong default column in DBS
> ------------------------------------------------------------------------------
>
> Key: HIVE-27594
> URL: https://issues.apache.org/jira/browse/HIVE-27594
> Project: Hive
> Issue Type: Bug
> Reporter: Harish
> Priority: Critical
> Labels: HiveMetaStoreClient, hive
>
> I set up Hive/Spark/Hadoop (3.3.0) on the same machine, with the Hive
> metastore on an external PG server:
> Hive 3.1.3,
> PG 12 - remote metastore (tried 9.2 as well)
> changed spark and hive site.xml
> used schematool to populate the default tables
> Using Oracle Object Storage as the Hadoop storage; I have replaced the actual
> path with a placeholder.
> When I try to open Hive from the terminal I see the error below, and I am
> totally confused by the columns it refers to in the error vs. those in the
> DBS table. It works fine from Spark, though.
> Here CATALOG_NAME is not part of
> [https://github.com/apache/hive/blob/master/standalone-metastore/metastore-server/src/main/sql/postgres/hive-schema-3.1.0.postgres.sql]
> how come is the code referring to a different structure?
> 2023-08-05 09:30:21,188 INFO objectstorage.ObjectStorageClient: Setting
> endpoint to [https://oracle_ojectstorage|https://oracle_ojectstorage/]
> 2023-08-05 09:30:21,438 INFO jersey.JerseyHttpClientBuilder: Setting
> connector provider to ApacheConnectorProvider 2023-08-05 09:30:21,548 INFO
> store.BmcDataStore: Using upload configuration:
> UploadConfiguration(minimumLengthForMultipartUpload=128,
> lengthPerUploadPart=128, maxPartsForMultipartUpload=10000,
> enforceMd5BeforeUpload=false, enforceMd5BeforeMultipartUpload=false,
> allowMultipartUploads=true, allowParallelUploads=true,
> disableAutoAbort=false) 2023-08-05 09:30:21,551 INFO bmc.ClientRuntime: Using
> SDK: Oracle-JavaSDK/3.17.1 2023-08-05 09:30:21,551 INFO bmc.ClientRuntime:
> User agent set to: Oracle-JavaSDK/3.17.1 (Linux/3.10.0-1160.66.1.el7.x86_64;
> Java/1.8.0_342; OpenJDK 64-Bit Server VM/25.342-b07)
> Oracle-HDFS_Connector/3.3.4.1.2.0 2023-08-05 09:30:21,556 INFO
> store.BmcDataStore: Object metadata caching disabled 2023-08-05 09:30:21,556
> INFO store.BmcDataStore: fs.oci.caching.object.parquet.enabled is disabled,
> setting parquet cache spec to 'maximumSize=0', which disables the cache
> 2023-08-05 09:30:21,557 INFO hdfs.BmcFilesystemImpl: Setting working
> directory to oci://path/user/user, and initialized uri to oci://path
> 2023-08-05 09:30:21,570 INFO hdfs.BmcFilesystem: Physically closing delegate
> for oci://path/ 2023-08-05 09:30:21,598 WARN metastore.HiveMetaStore:
> Retrying creating default database after error: Exception thrown flushing
> changes to datastore javax.jdo.JDODataStoreException: Exception thrown
> flushing changes to datastore at
> org.datanucleus.api.jdo.NucleusJDOHelper.getJDOExceptionForNucleusException(NucleusJDOHelper.java:543)
> at org.datanucleus.api.jdo.JDOTransaction.commit(JDOTransaction.java:171) at
> org.apache.hadoop.hive.metastore.ObjectStore.commitTransaction(ObjectStore.java:766)
> at
> org.apache.hadoop.hive.metastore.ObjectStore.createDatabase(ObjectStore.java:954)
> at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
> at
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
> at java.lang.reflect.Method.invoke(Method.java:498) at
> org.apache.hadoop.hive.metastore.RawStoreProxy.invoke(RawStoreProxy.java:97)
> at com.sun.proxy.$Proxy36.createDatabase(Unknown Source) at
> org.apache.hadoop.hive.metastore.HiveMetaStore$HMSHandler.createDefaultDB_core(HiveMetaStore.java:753)
> at
> org.apache.hadoop.hive.metastore.HiveMetaStore$HMSHandler.createDefaultDB(HiveMetaStore.java:771)
> at
> org.apache.hadoop.hive.metastore.HiveMetaStore$HMSHandler.init(HiveMetaStore.java:540)
> at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
> at
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
> at java.lang.reflect.Method.invoke(Method.java:498) at
> org.apache.hadoop.hive.metastore.RetryingHMSHandler.invokeInternal(RetryingHMSHandler.java:147)
> at
> org.apache.hadoop.hive.metastore.RetryingHMSHandler.invoke(RetryingHMSHandler.java:108)
> at
> org.apache.hadoop.hive.metastore.RetryingHMSHandler.<init>(RetryingHMSHandler.java:80)
> at
> org.apache.hadoop.hive.metastore.RetryingHMSHandler.getProxy(RetryingHMSHandler.java:93)
> at
> org.apache.hadoop.hive.metastore.HiveMetaStore.newRetryingHMSHandler(HiveMetaStore.java:8678)
> at
> org.apache.hadoop.hive.metastore.HiveMetaStoreClient.<init>(HiveMetaStoreClient.java:169)
> at
> org.apache.hadoop.hive.ql.metadata.SessionHiveMetaStoreClient.<init>(SessionHiveMetaStoreClient.java:94)
> at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method) at
> sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
> at
> sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
> at java.lang.reflect.Constructor.newInstance(Constructor.java:423) at
> org.apache.hadoop.hive.metastore.utils.JavaUtils.newInstance(JavaUtils.java:84)
> at
> org.apache.hadoop.hive.metastore.RetryingMetaStoreClient.<init>(RetryingMetaStoreClient.java:95)
> at
> org.apache.hadoop.hive.metastore.RetryingMetaStoreClient.getProxy(RetryingMetaStoreClient.java:148)
> at
> org.apache.hadoop.hive.metastore.RetryingMetaStoreClient.getProxy(RetryingMetaStoreClient.java:119)
> at
> org.apache.hadoop.hive.ql.metadata.Hive.createMetaStoreClient(Hive.java:4306)
> at org.apache.hadoop.hive.ql.metadata.Hive.getMSC(Hive.java:4374) at
> org.apache.hadoop.hive.ql.metadata.Hive.getMSC(Hive.java:4354) at
> org.apache.hadoop.hive.ql.metadata.Hive.getAllFunctions(Hive.java:4610) at
> org.apache.hadoop.hive.ql.metadata.Hive.reloadFunctions(Hive.java:291) at
> org.apache.hadoop.hive.ql.metadata.Hive.registerAllFunctionsOnce(Hive.java:274)
> at org.apache.hadoop.hive.ql.metadata.Hive.<init>(Hive.java:442) at
> org.apache.hadoop.hive.ql.metadata.Hive.create(Hive.java:382) at
> org.apache.hadoop.hive.ql.metadata.Hive.getInternal(Hive.java:362) at
> org.apache.hadoop.hive.ql.metadata.Hive.get(Hive.java:331) at
> org.apache.hadoop.hive.ql.metadata.HiveMaterializedViewsRegistry.init(HiveMaterializedViewsRegistry.java:133)
> at org.apache.hadoop.hive.cli.CliDriver.run(CliDriver.java:755) at
> org.apache.hadoop.hive.cli.CliDriver.main(CliDriver.java:683) at
> sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
> at
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
> at java.lang.reflect.Method.invoke(Method.java:498) at
> org.apache.hadoop.util.RunJar.run(RunJar.java:323) at
> org.apache.hadoop.util.RunJar.main(RunJar.java:236)
> NestedThrowablesStackTrace: java.sql.BatchUpdateException: Batch entry 0
> INSERT INTO "DBS"
> ("DB_ID","CATALOG_NAME","DESC","DB_LOCATION_URI","NAME","OWNER_NAME","OWNER_TYPE")
> VALUES (256,'hive','Default Hive
> database','oci://path/','default','public','ROLE') was aborted. Call
> getNextException to see the cause. at
> org.postgresql.jdbc.BatchResultHandler.handleError(BatchResultHandler.java:133)
> at
> org.postgresql.core.v3.QueryExecutorImpl$1.handleError(QueryExecutorImpl.java:419)
> at
> org.postgresql.core.v3.QueryExecutorImpl.processResults(QueryExecutorImpl.java:2004)
> at
> org.postgresql.core.v3.QueryExecutorImpl.execute(QueryExecutorImpl.java:360)
> at org.postgresql.jdbc.PgStatement.executeBatch(PgStatement.java:1019) at
> com.zaxxer.hikari.pool.ProxyStatement.executeBatch(ProxyStatement.java:125)
> at
> com.zaxxer.hikari.pool.HikariProxyPreparedStatement.executeBatch(HikariProxyPreparedStatement.java)
> at
> org.datanucleus.store.rdbms.ParamLoggingPreparedStatement.executeBatch(ParamLoggingPreparedStatement.java:366)
> at
> org.datanucleus.store.rdbms.SQLController.processConnectionStatement(SQLController.java:676)
> at
> org.datanucleus.store.rdbms.SQLController.processStatementsForConnection(SQLController.java:644)
> at
> org.datanucleus.store.rdbms.SQLController$1.transactionFlushed(SQLController.java:731)
> at
> org.datanucleus.store.connection.AbstractManagedConnection.transactionFlushed(AbstractManagedConnection.java:89)
> at
> org.datanucleus.store.connection.ConnectionManagerImpl$2.transactionFlushed(ConnectionManagerImpl.java:450)
> at org.datanucleus.TransactionImpl.flush(TransactionImpl.java:210) at
> org.datanucleus.TransactionImpl.commit(TransactionImpl.java:274) at
> org.datanucleus.api.jdo.JDOTransaction.commit(JDOTransaction.java:107) at
> org.apache.hadoop.hive.metastore.ObjectStore.commitTransaction(ObjectStore.java:766)
> at
> org.apache.hadoop.hive.metastore.ObjectStore.createDatabase(ObjectStore.java:954)
> at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
> at
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
> at java.lang.reflect.Method.invoke(Method.java:498) at
> org.apache.hadoop.hive.metastore.RawStoreProxy.invoke(RawStoreProxy.java:97)
> at com.sun.proxy.$Proxy36.createDatabase(Unknown Source) at
> org.apache.hadoop.hive.metastore.HiveMetaStore$HMSHandler.createDefaultDB_core(HiveMetaStore.java:753)
> at
> org.apache.hadoop.hive.metastore.HiveMetaStore$HMSHandler.createDefaultDB(HiveMetaStore.java:771)
> at
> org.apache.hadoop.hive.metastore.HiveMetaStore$HMSHandler.init(HiveMetaStore.java:540)
> at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
> at
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
> at java.lang.reflect.Method.invoke(Method.java:498) at
> org.apache.hadoop.hive.metastore.RetryingHMSHandler.invokeInternal(RetryingHMSHandler.java:147)
> at
> org.apache.hadoop.hive.metastore.RetryingHMSHandler.invoke(RetryingHMSHandler.java:108)
> at
> org.apache.hadoop.hive.metastore.RetryingHMSHandler.<init>(RetryingHMSHandler.java:80)
> at
> org.apache.hadoop.hive.metastore.RetryingHMSHandler.getProxy(RetryingHMSHandler.java:93)
> at
> org.apache.hadoop.hive.metastore.HiveMetaStore.newRetryingHMSHandler(HiveMetaStore.java:8678)
> at
> org.apache.hadoop.hive.metastore.HiveMetaStoreClient.<init>(HiveMetaStoreClient.java:169)
> at
> org.apache.hadoop.hive.ql.metadata.SessionHiveMetaStoreClient.<init>(SessionHiveMetaStoreClient.java:94)
> at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method) at
> sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
> at
> sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
> at java.lang.reflect.Constructor.newInstance(Constructor.java:423) at
> org.apache.hadoop.hive.metastore.utils.JavaUtils.newInstance(JavaUtils.java:84)
> at
> org.apache.hadoop.hive.metastore.RetryingMetaStoreClient.<init>(RetryingMetaStoreClient.java:95)
> at
> org.apache.hadoop.hive.metastore.RetryingMetaStoreClient.getProxy(RetryingMetaStoreClient.java:148)
> at
> org.apache.hadoop.hive.metastore.RetryingMetaStoreClient.getProxy(RetryingMetaStoreClient.java:119)
> at
> org.apache.hadoop.hive.ql.metadata.Hive.createMetaStoreClient(Hive.java:4306)
> at org.apache.hadoop.hive.ql.metadata.Hive.getMSC(Hive.java:4374) at
> org.apache.hadoop.hive.ql.metadata.Hive.getMSC(Hive.java:4354) at
> org.apache.hadoop.hive.ql.metadata.Hive.getAllFunctions(Hive.java:4610) at
> org.apache.hadoop.hive.ql.metadata.Hive.reloadFunctions(Hive.java:291) at
> org.apache.hadoop.hive.ql.metadata.Hive.registerAllFunctionsOnce(Hive.java:274)
> at org.apache.hadoop.hive.ql.metadata.Hive.<init>(Hive.java:442) at
> org.apache.hadoop.hive.ql.metadata.Hive.create(Hive.java:382) at
> org.apache.hadoop.hive.ql.metadata.Hive.getInternal(Hive.java:362) at
> org.apache.hadoop.hive.ql.metadata.Hive.get(Hive.java:331) at
> org.apache.hadoop.hive.ql.metadata.HiveMaterializedViewsRegistry.init(HiveMaterializedViewsRegistry.java:133)
> at org.apache.hadoop.hive.cli.CliDriver.run(CliDriver.java:755) at
> org.apache.hadoop.hive.cli.CliDriver.main(CliDriver.java:683) at
> sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
> at
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
> at java.lang.reflect.Method.invoke(Method.java:498) at
> org.apache.hadoop.util.RunJar.run(RunJar.java:323) at
> org.apache.hadoop.util.RunJar.main(RunJar.java:236) 2023-08-05 09:30:21,602
> WARN metastore.ObjectStore: Failed to get database hive.default, returning
> NoSuchObjectException 2023-08-05 09:30:21,605 ERROR
> metastore.RetryingHMSHandler: Retrying HMSHandler after 2000 ms (attempt 1 of
> 10) with error: javax.jdo.JDODataStoreException: Exception thrown flushing
> changes to datastore at
> org.datanucleus.api.jdo.NucleusJDOHelper.getJDOExceptionForNucleusException(NucleusJDOHelper.java:543)
> at org.datanucleus.api.jdo.JDOTransaction.commit(JDOTransaction.java:171) at
> org.apache.hadoop.hive.metastore.ObjectStore.commitTransaction(ObjectStore.java:766)
> at
> org.apache.hadoop.hive.metastore.ObjectStore.createDatabase(ObjectStore.java:954)
> at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
> at
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
> at java.lang.reflect.Method.invoke(Method.java:498) at
> org.apache.hadoop.hive.metastore.RawStoreProxy.invoke(RawStoreProxy.java:97)
> at com.sun.proxy.$Proxy36.createDatabase(Unknown Source) at
> org.apache.hadoop.hive.metastore.HiveMetaStore$HMSHandler.createDefaultDB_core(HiveMetaStore.java:753)
> at
> org.apache.hadoop.hive.metastore.HiveMetaStore$HMSHandler.createDefaultDB(HiveMetaStore.java:775)
> at
> org.apache.hadoop.hive.metastore.HiveMetaStore$HMSHandler.init(HiveMetaStore.java:540)
> at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
> at
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
> at java.lang.reflect.Method.invoke(Method.java:498) at
> org.apache.hadoop.hive.metastore.RetryingHMSHandler.invokeInternal(RetryingHMSHandler.java:147)
> at
> org.apache.hadoop.hive.metastore.RetryingHMSHandler.invoke(RetryingHMSHandler.java:108)
> at
> org.apache.hadoop.hive.metastore.RetryingHMSHandler.<init>(RetryingHMSHandler.java:80)
> at
> org.apache.hadoop.hive.metastore.RetryingHMSHandler.getProxy(RetryingHMSHandler.java:93)
> at
> org.apache.hadoop.hive.metastore.HiveMetaStore.newRetryingHMSHandler(HiveMetaStore.java:8678)
> at
> org.apache.hadoop.hive.metastore.HiveMetaStoreClient.<init>(HiveMetaStoreClient.java:169)
> at
> org.apache.hadoop.hive.ql.metadata.SessionHiveMetaStoreClient.<init>(SessionHiveMetaStoreClient.java:94)
> at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method) at
> sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
> at
> sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
> at java.lang.reflect.Constructor.newInstance(Constructor.java:423) at
> org.apache.hadoop.hive.metastore.utils.JavaUtils.newInstance(JavaUtils.java:84)
> at
> org.apache.hadoop.hive.metastore.RetryingMetaStoreClient.<init>(RetryingMetaStoreClient.java:95)
> at
> org.apache.hadoop.hive.metastore.RetryingMetaStoreClient.getProxy(RetryingMetaStoreClient.java:148)
> at
> org.apache.hadoop.hive.metastore.RetryingMetaStoreClient.getProxy(RetryingMetaStoreClient.java:119)
> at
> org.apache.hadoop.hive.ql.metadata.Hive.createMetaStoreClient(Hive.java:4306)
> at org.apache.hadoop.hive.ql.metadata.Hive.getMSC(Hive.java:4374) at
> org.apache.hadoop.hive.ql.metadata.Hive.getMSC(Hive.java:4354) at
> org.apache.hadoop.hive.ql.metadata.Hive.getAllFunctions(Hive.java:4610) at
> org.apache.hadoop.hive.ql.metadata.Hive.reloadFunctions(Hive.java:291) at
> org.apache.hadoop.hive.ql.metadata.Hive.registerAllFunctionsOnce(Hive.java:274)
> at org.apache.hadoop.hive.ql.metadata.Hive.<init>(Hive.java:442) at
> org.apache.hadoop.hive.ql.metadata.Hive.create(Hive.java:382) at
> org.apache.hadoop.hive.ql.metadata.Hive.getInternal(Hive.java:362) at
> org.apache.hadoop.hive.ql.metadata.Hive.get(Hive.java:331) at
> org.apache.hadoop.hive.ql.metadata.HiveMaterializedViewsRegistry.init(HiveMaterializedViewsRegistry.java:133)
> at org.apache.hadoop.hive.cli.CliDriver.run(CliDriver.java:755) at
> org.apache.hadoop.hive.cli.CliDriver.main(CliDriver.java:683) at
> sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
> at
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
> at java.lang.reflect.Method.invoke(Method.java:498) at
> org.apache.hadoop.util.RunJar.run(RunJar.java:323) at
> org.apache.hadoop.util.RunJar.main(RunJar.java:236)
>
> hive-site.xml:
> <?xml version="1.0" encoding="UTF-8" standalone="no"?>
> <?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
> <configuration>
> <property>
> <name>hive.exec.scratchdir</name>
> <value>/dd/tmp/</value>
> </property>
> <property>
> <name>datanucleus.autoCreateSchema</name>
> <value>false</value>
> </property>
>
> <property>
> <name>datanucleus.fixedDatastore</name>
> <value>true</value>
> </property>
>
> <property>
> <name>datanucleus.autoStartMechanism</name>
> <value>SchemaTable</value>
> </property>
>
> <property>
> <name>hive.metastore.schema.verification</name>
> <value>false</value>
> <description></description>
> </property>
>
> <property>
> <name>spark.sql.warehouse.dir</name>
> <value>oci://cx/</value>
> </property>
>
> <property>
> <name>hive.metastore.warehouse.dir</name>
> <value>oci://cx/</value>
> </property>
> <property>
> <name>javax.jdo.option.ConnectionURL</name>
> <value>jdbc:postgresql://0:5432/hivemeta</value>
> </property>
> <property>
> <name>javax.jdo.option.ConnectionDriverName</name>
> <value>org.postgresql.Driver</value>
> </property>
> <property>
> <name>javax.jdo.option.ConnectionUserName</name>
> <value>cccc</value>
> </property>
> <property>
> <name>javax.jdo.option.ConnectionPassword</name>
> <value>cc@3</value>
> </property>
>
> <property>
> <name>hive.metastore.port</name>
> <value>9083</value>
> </property>
> <property>
> <name>hive.server2.thrift.port</name>
> <value>10000</value>
> </property>
>
> <property>
> <name>mapreduce.input.fileinputformat.input.dir.recursive</name>
> <value>true</value>
> </property>
>
> <property>
> <name>hive.server2.use.SSL</name>
> <value>false</value>
> </property>
>
>
> </configuration>
>
>
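
Note that the hive-site.xml above sets hive.metastore.schema.verification to false, so a jar/schema version mismatch only surfaces later as the failed INSERT on "DBS". A hedged configuration sketch (a standard Hive property, not part of the original report) that makes such a mismatch fail fast at startup with an explicit version error instead:

```xml
<!-- Sketch: strict schema verification; the metastore refuses to start if
     the schema version recorded in the VERSION table does not match the
     version of the metastore jars on the classpath. -->
<property>
  <name>hive.metastore.schema.verification</name>
  <value>true</value>
</property>
```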
--
This message was sent by Atlassian Jira
(v8.20.10#820010)