[
https://issues.apache.org/jira/browse/SPARK-20153?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15973541#comment-15973541
]
Steve Loughran commented on SPARK-20153:
----------------------------------------
[[email protected]]: thanks for discovering that. I don't know what Hadoop
version they are using, or more specifically whether they backported the S3A
per-bucket feature to EMR's Hadoop fork. It isn't in Hadoop 2.7.x, after all.
# I'd avoid mixing access to the same data via s3 and s3a, just because I have
no idea what will happen.
# Unless you can get a list from the AWS team of what's in their s3a client,
you may not get the multiple-bucket feature. If you do have it: go for it.
(Easy test, sketched below: set an endpoint for a specific bucket you create in
the Frankfurt region, while leaving the default == us-east/central. If you can
read the data, then the per-bucket endpoint property is being picked up.)
> Support Multiple aws credentials in order to access multiple Hive on S3 table
> in spark application
> ---------------------------------------------------------------------------------------------------
>
> Key: SPARK-20153
> URL: https://issues.apache.org/jira/browse/SPARK-20153
> Project: Spark
> Issue Type: Improvement
> Components: Spark Core
> Affects Versions: 2.0.1, 2.1.0
> Reporter: Franck Tago
> Priority: Minor
>
> I need to access multiple Hive tables in my Spark application, where each
> Hive table is
> 1- an external table with data sitting on S3
> 2- owned by a different AWS user, so I need to provide different AWS
> credentials.
> I am familiar with setting the AWS credentials in the Hadoop configuration
> object, but that does not really help me because I can only set one pair of
> (fs.s3a.awsAccessKeyId, fs.s3a.awsSecretAccessKey).
> From my research, there is no easy or elegant way to do this in Spark.
> Why is that?
> How do I address this use case?
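For reference, a minimal sketch of how this use case could be handled if the
S3A client in use does support per-bucket configuration (Hadoop 2.8+); the
bucket names, table names, and credential values are hypothetical placeholders:
{code}
import org.apache.spark.sql.SparkSession

val spark = SparkSession.builder()
  .appName("multi-credential-hive-on-s3")
  .enableHiveSupport()
  .getOrCreate()

val conf = spark.sparkContext.hadoopConfiguration

// Credentials for the bucket backing the first external Hive table.
conf.set("fs.s3a.bucket.team-a-data.access.key", "<team-a-access-key>")
conf.set("fs.s3a.bucket.team-a-data.secret.key", "<team-a-secret-key>")

// Different credentials for the bucket backing the second table.
conf.set("fs.s3a.bucket.team-b-data.access.key", "<team-b-access-key>")
conf.set("fs.s3a.bucket.team-b-data.secret.key", "<team-b-secret-key>")

// Each table resolves its own credentials from the per-bucket options,
// so both can be read in the same application.
spark.sql("SELECT count(*) FROM team_a_table").show()
spark.sql("SELECT count(*) FROM team_b_table").show()
{code}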