[ https://issues.apache.org/jira/browse/SPARK-19739?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16720650#comment-16720650 ]

Imran Rashid edited comment on SPARK-19739 at 12/13/18 10:18 PM:
-----------------------------------------------------------------

[~ste...@apache.org] I didn't realize at first when using this that I also 
needed to add the conf {{--conf 
"spark.hadoop.fs.s3a.aws.credentials.provider=org.apache.hadoop.fs.s3a.TemporaryAWSCredentialsProvider"}} 
to have {{AWS_SESSION_TOKEN}} take any effect.  You don't get any useful error 
message if you don't add that credentials provider -- just access forbidden.  Do 
you think it's useful to set that provider automatically as well when 
{{AWS_SESSION_TOKEN}} is set?
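
For reference, a minimal sketch of the combination described above (the {{SparkConf}} form is illustrative; only the config key and provider class come from this issue and the comment):

{code:scala}
import org.apache.spark.SparkConf

// Programmatic equivalent of the --conf flag mentioned above: point s3a at the
// temporary-credentials provider so the session token is actually used.
val conf = new SparkConf()
  .set("spark.hadoop.fs.s3a.aws.credentials.provider",
    "org.apache.hadoop.fs.s3a.TemporaryAWSCredentialsProvider")

// AWS_ACCESS_KEY_ID, AWS_SECRET_ACCESS_KEY and AWS_SESSION_TOKEN are still taken
// from the environment and propagated to the fs.s3a.* keys by SparkHadoopUtil.
{code}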


was (Author: irashid):
[~ste...@apache.org] I didn't realize at first when using this that I also 
needed to add the conf {{--conf 
"spark.hadoop.fs.s3a.aws.credentials.provider=org.apache.hadoop.fs.s3a.TemporaryAWSCredentialsProvider"}} 
to have {{AWS_SESSION_TOKEN}} take any effect.  You don't get any useful error 
message when that happens -- just access forbidden.  Do you think it's useful to do 
that automatically as well when {{AWS_SESSION_TOKEN}} is set?

> SparkHadoopUtil.appendS3AndSparkHadoopConfigurations to propagate full set of 
> AWS env vars
> ------------------------------------------------------------------------------------------
>
>                 Key: SPARK-19739
>                 URL: https://issues.apache.org/jira/browse/SPARK-19739
>             Project: Spark
>          Issue Type: Improvement
>          Components: Spark Core
>    Affects Versions: 2.1.0
>            Reporter: Steve Loughran
>            Assignee: Genmao Yu
>            Priority: Minor
>             Fix For: 2.2.0
>
>
> {{SparkHadoopUtil.appendS3AndSparkHadoopConfigurations()}} propagates the AWS 
> user and secret key to the s3n and s3a config options, so that secrets set by 
> the user reach the cluster.
> AWS also supports session authentication (env var {{AWS_SESSION_TOKEN}}) and 
> region endpoints ({{AWS_DEFAULT_REGION}}), the latter being critical if you 
> want to address V4-auth-only endpoints like Frankfurt and Seoul. 
> These env vars should be picked up and passed down to S3a too. It's 4+ lines of 
> code, though impossible to test unless the existing code is refactored to 
> take the env vars as a {{Map[String, String]}}, allowing a test suite to set 
> the values in its own map (see the sketch below).
> Side issue: what if only half the env vars are set and users are trying to 
> understand why auth is failing? It may be good to build up a string 
> identifying which env vars had their values propagated, and log that at debug, 
> while not logging the values themselves, obviously.
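
The refactoring suggested in the description, sketched roughly (this is an illustrative helper, not the committed patch; the {{fs.s3a.*}} key names are the standard Hadoop s3a properties):

{code:scala}
import org.apache.hadoop.conf.Configuration

// Hypothetical helper: take the environment as a Map[String, String] so a test
// suite can inject its own values instead of reading System.getenv directly.
object AwsEnvPropagation {
  def appendAwsEnvVars(env: Map[String, String], hadoopConf: Configuration): Unit = {
    for {
      keyId  <- env.get("AWS_ACCESS_KEY_ID")
      secret <- env.get("AWS_SECRET_ACCESS_KEY")
    } {
      hadoopConf.set("fs.s3a.access.key", keyId)
      hadoopConf.set("fs.s3a.secret.key", secret)
      // Session auth: only useful if s3a is also configured to use the
      // TemporaryAWSCredentialsProvider (see the comment at the top).
      env.get("AWS_SESSION_TOKEN").foreach { token =>
        hadoopConf.set("fs.s3a.session.token", token)
      }
      // Record which env vars were propagated (names only, never values); the
      // real code would log this at debug rather than print it.
      val propagated = Seq("AWS_ACCESS_KEY_ID", "AWS_SECRET_ACCESS_KEY", "AWS_SESSION_TOKEN")
        .filter(env.contains)
      println(s"Propagated AWS env vars to s3a: ${propagated.mkString(", ")}")
    }
  }
}
{code}

Mapping {{AWS_DEFAULT_REGION}} to an s3a endpoint is left out of the sketch, since region-to-endpoint handling sits on the Hadoop side.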


