[
https://issues.apache.org/jira/browse/HIVE-16913?focusedWorklogId=804643&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-804643
]
ASF GitHub Bot logged work on HIVE-16913:
-----------------------------------------
Author: ASF GitHub Bot
Created on: 30/Aug/22 06:25
Start Date: 30/Aug/22 06:25
Worklog Time Spent: 10m
Work Description: zhangbutao commented on PR #3542:
URL: https://github.com/apache/hive/pull/3542#issuecomment-1231201583
> Can we have a test (either a unit test or a qtest) for this? With dummy
keys and printing them somewhere
@shameersss1 Thanks for your advice. This PR seems difficult to test with the
current test framework. Although HIVE-14373 added integration tests for
Hive on S3, those tests mainly use a global access key and secret key.
This PR, however, targets dynamic per-session keys, so I need to set the
access key and secret key via the Hive set command. I may need some time to
explore how to test this scenario...
Issue Time Tracking
-------------------
Worklog Id: (was: 804643)
Time Spent: 40m (was: 0.5h)
> Support per-session S3 credentials
> ----------------------------------
>
> Key: HIVE-16913
> URL: https://issues.apache.org/jira/browse/HIVE-16913
> Project: Hive
> Issue Type: Improvement
> Reporter: Vihang Karajgaonkar
> Assignee: Vihang Karajgaonkar
> Priority: Major
> Labels: pull-request-available
> Time Spent: 40m
> Remaining Estimate: 0h
>
> Currently, the credentials needed to support Hive-on-S3 (or any other
> cloud storage) must be added to hive-site.xml, either via a Hadoop
> credential provider or by putting the keys in hive-site.xml in plain text
> (insecure).
> This limits the use case to a single S3 key. If we configure per-bucket
> S3 keys as described
> [here|http://hadoop.apache.org/docs/current/hadoop-aws/tools/hadoop-aws/index.html#Configurations_different_S3_buckets],
> it exposes access to all the buckets to all Hive users.
> It is possible that different sets of users would prefer not to
> share their buckets and still be able to process the data using Hive.
> Enabling session-level credentials would help solve such use cases. For
> example, this currently doesn't work:
> {noformat}
> set fs.s3a.secret.key=my_secret_key;
> set fs.s3a.access.key=my_access_key;
> {noformat}
> because the metastore is unaware of the keys. This doesn't work either:
> {noformat}
> set fs.s3a.secret.key=my_secret_key;
> set fs.s3a.access.key=my_access_key;
> set metaconf:fs.s3a.secret.key=my_secret_key;
> set metaconf:fs.s3a.access.key=my_access_key;
> {noformat}
> This is because only certain metastore configurations, defined in
> {{HiveConf.MetaVars}}, are allowed to be set by the user. If we enable the
> approaches above, we could allow multiple S3 credentials on a
> per-session basis.
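For reference, the per-bucket configuration that the description links to looks roughly like this in core-site.xml (a sketch based on the hadoop-aws documentation; the bucket name and key values are placeholders). Anyone who can read this file gains access to every configured bucket, which is exactly the exposure the issue describes:

{noformat}
<!-- Sketch of hadoop-aws per-bucket credentials; "team-a-bucket"
     and the key values below are made-up placeholders. -->
<property>
  <name>fs.s3a.bucket.team-a-bucket.access.key</name>
  <value>TEAM_A_ACCESS_KEY</value>
</property>
<property>
  <name>fs.s3a.bucket.team-a-bucket.secret.key</name>
  <value>TEAM_A_SECRET_KEY</value>
</property>
{noformat}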
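The whitelist behavior described in the last paragraph can be sketched as follows. This is a hedged illustration only: the class and method names below (MetaVarWhitelist, isAllowedMetaVar) are hypothetical, and the real check lives in Hive's HiveConf.MetaVars enum with a different set of allowed keys.

```java
import java.util.Set;

// Illustrative sketch: a session may only override metastore settings
// that appear on a fixed whitelist. The entries below are an invented
// subset; the real list is Hive's HiveConf.MetaVars enum.
public class MetaVarWhitelist {
    private static final Set<String> ALLOWED_META_VARS = Set.of(
            "metastore.client.socket.timeout",
            "hive.metastore.try.direct.sql");

    // Returns true only for whitelisted metastore keys, so an S3
    // credential key would be rejected by a check like this one.
    public static boolean isAllowedMetaVar(String key) {
        return ALLOWED_META_VARS.contains(key);
    }

    public static void main(String[] args) {
        // An S3 credential key is not on the whitelist...
        System.out.println(isAllowedMetaVar("fs.s3a.secret.key"));
        // ...while a whitelisted metastore setting is accepted.
        System.out.println(isAllowedMetaVar("metastore.client.socket.timeout"));
    }
}
```

Supporting per-session S3 credentials would amount to widening (or bypassing) such a whitelist for the `fs.s3a.*` keys on a per-session basis.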
--
This message was sent by Atlassian Jira
(v8.20.10#820010)