simuhunluo commented on issue #2168:
URL: https://github.com/apache/iceberg/issues/2168#issuecomment-769627622
@bkahloon Sure, glad to share.
In fact, I am using the Hive metastore service, not an AWS Glue table.
+ First I create the catalog with the Flink SQL client (using MinIO as the S3 service for a PoC verification):
```
CREATE CATALOG hive_catalog with(
'type'='iceberg',
'catalog-type'='hive',
'uri'='thrift://localhost:9083',
'warehouse'='s3://mybucket/'
);
```
+ Then, when I try to create a database, an error occurs:
```
CREATE DATABASE hive_catalog.mydb;
```
+ Error content:
```
Caused by: org.apache.hadoop.hive.metastore.api.MetaException: Got exception: org.apache.hadoop.fs.s3.S3Exception org.jets3t.service.S3ServiceException: S3 Error Message. -- ResponseCode: 403, ResponseStatus: Forbidden, XML Error Message: <?xml version="1.0" encoding="UTF-8"?><Error><Code>InvalidAccessKeyId</Code><Message>The AWS Access Key Id you provided does not exist in our records.</Message><AWSAccessKeyId>minioadmin</AWSAccessKeyId><RequestId>707081D8D8FAE888</RequestId><HostId>h0zIBYj2MJDZ7H9uWtFXwkLyp8HWUk3F7mAr8DfrTym4HyKBFuJAqpMb3hOcEg3F3iOZkb0HBug=</HostId></Error>
	at org.apache.hadoop.hive.metastore.api.ThriftHiveMetastore$create_database_result$create_database_resultStandardScheme.read(ThriftHiveMetastore.java:39343) ~[?:?]
	at org.apache.hadoop.hive.metastore.api.ThriftHiveMetastore$create_database_result$create_database_resultStandardScheme.read(ThriftHiveMetastore.java:39311) ~[?:?]
	at org.apache.hadoop.hive.metastore.api.ThriftHiveMetastore$create_database_result.read(ThriftHiveMetastore.java:39245) ~[?:?]
	at org.apache.thrift.TServiceClient.receiveBase(TServiceClient.java:86) ~[?:?]
	at org.apache.hadoop.hive.metastore.api.ThriftHiveMetastore$Client.recv_create_database(ThriftHiveMetastore.java:1106) ~[?:?]
	at org.apache.hadoop.hive.metastore.api.ThriftHiveMetastore$Client.create_database(ThriftHiveMetastore.java:1093) ~[?:?]
	at org.apache.hadoop.hive.metastore.HiveMetaStoreClient.createDatabase(HiveMetaStoreClient.java:809) ~[?:?]
	at org.apache.iceberg.hive.HiveCatalog.lambda$createNamespace$7(HiveCatalog.java:302) ~[?:?]
	at org.apache.iceberg.hive.ClientPool.run(ClientPool.java:54) ~[?:?]
	at org.apache.iceberg.hive.HiveCatalog.createNamespace(HiveCatalog.java:301) ~[?:?]
	at org.apache.iceberg.flink.FlinkCatalog.createDatabase(FlinkCatalog.java:200) ~[?:?]
	at org.apache.iceberg.flink.FlinkCatalog.createDatabase(FlinkCatalog.java:193) ~[?:?]
	at org.apache.flink.table.api.internal.TableEnvironmentImpl.executeOperation(TableEnvironmentImpl.java:968) ~[flink-table_2.12-1.11.2.jar:1.11.2]
	at org.apache.flink.table.api.internal.TableEnvironmentImpl.executeSql(TableEnvironmentImpl.java:690) ~[flink-table_2.12-1.11.2.jar:1.11.2]
	at org.apache.flink.table.client.gateway.local.LocalExecutor.lambda$executeSql$7(LocalExecutor.java:360) ~[flink-sql-client_2.11-1.11.2.jar:1.11.2]
	at org.apache.flink.table.client.gateway.local.ExecutionContext.wrapClassLoader(ExecutionContext.java:255) ~[flink-sql-client_2.11-1.11.2.jar:1.11.2]
	at org.apache.flink.table.client.gateway.local.LocalExecutor.executeSql(LocalExecutor.java:360) ~[flink-sql-client_2.11-1.11.2.jar:1.11.2]
	... 8 more
```
It is easy to see why this error occurs: `minioadmin` is my local MinIO access key, so of course it does not exist on the real AWS S3 server. What puzzles me is that the caller of the S3 API should be Iceberg, not the Hive metastore, yet the exception comes back from the metastore over Thrift.
Does anyone have any ideas?
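One possible direction (an assumption on my part, not verified): since the failure surfaces as a `MetaException` returned by the `create_database` Thrift call, the Hive metastore process itself appears to resolve the `s3://mybucket/` warehouse path when it creates the database directory. If so, the metastore host would need its own Hadoop S3 configuration pointing at MinIO rather than AWS. A sketch of such a `core-site.xml` fragment on the metastore side, with placeholder endpoint and credentials:

```xml
<!-- Hypothetical core-site.xml fragment for the Hive metastore host.
     Endpoint and keys below are placeholders for a local MinIO setup. -->
<configuration>
  <!-- Route the s3:// scheme to the S3A filesystem instead of the
       legacy jets3t-based client visible in the stack trace. -->
  <property>
    <name>fs.s3.impl</name>
    <value>org.apache.hadoop.fs.s3a.S3AFileSystem</value>
  </property>
  <!-- Point S3A at the local MinIO endpoint, not AWS. -->
  <property>
    <name>fs.s3a.endpoint</name>
    <value>http://localhost:9000</value>
  </property>
  <property>
    <name>fs.s3a.access.key</name>
    <value>minioadmin</value>
  </property>
  <property>
    <name>fs.s3a.secret.key</name>
    <value>minioadmin</value>
  </property>
  <!-- MinIO is typically addressed path-style rather than
       virtual-hosted-style. -->
  <property>
    <name>fs.s3a.path.style.access</name>
    <value>true</value>
  </property>
</configuration>
```

With this in place (plus the `hadoop-aws` and AWS SDK jars on the metastore classpath), the 403 from AWS should no longer appear, if my assumption about the metastore touching S3 directly is right.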