[
https://issues.apache.org/jira/browse/SPARK-19739?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
]
Sean Owen resolved SPARK-19739.
-------------------------------
Resolution: Fixed
Fix Version/s: 2.2.0
Issue resolved by pull request 17080
[https://github.com/apache/spark/pull/17080]
> SparkHadoopUtil.appendS3AndSparkHadoopConfigurations to propagate full set of
> AWS env vars
> ------------------------------------------------------------------------------------------
>
> Key: SPARK-19739
> URL: https://issues.apache.org/jira/browse/SPARK-19739
> Project: Spark
> Issue Type: Improvement
> Components: Spark Core
> Affects Versions: 2.1.0
> Reporter: Steve Loughran
> Priority: Minor
> Fix For: 2.2.0
>
>
> {{SparkHadoopUtil.appendS3AndSparkHadoopConfigurations()}} propagates the AWS
> access key and secret key env vars, if set, to the s3n and s3a config
> options, so getting the user's secrets from the client out to the cluster.
> AWS also supports session authentication (env var {{AWS_SESSION_TOKEN}}) and
> region endpoints (env var {{AWS_DEFAULT_REGION}}), the latter being critical
> if you want to address V4-auth-only endpoints like Frankfurt and Seoul; see
> the sketch below.
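>
> For illustration, a minimal sketch of the propagation. The property names
> {{fs.s3a.session.token}} and {{fs.s3a.endpoint}}, the endpoint format, and
> the helper name {{appendAwsSessionEnvVars}} are assumptions, not the actual
> patch:
> {code:scala}
> import org.apache.hadoop.conf.Configuration
>
> // Sketch: copy the session-auth env vars into the Hadoop conf, if present.
> // Property names are assumptions; check the s3a docs for your Hadoop version.
> def appendAwsSessionEnvVars(env: Map[String, String], conf: Configuration): Unit = {
>   env.get("AWS_SESSION_TOKEN").foreach { token =>
>     conf.set("fs.s3a.session.token", token)
>   }
>   env.get("AWS_DEFAULT_REGION").foreach { region =>
>     // e.g. eu-central-1 -> s3.eu-central-1.amazonaws.com (Frankfurt, V4-only)
>     conf.set("fs.s3a.endpoint", s"s3.$region.amazonaws.com")
>   }
> }
> {code}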
> These env vars should be picked up and passed down to s3a too. It's 4+ lines
> of code, though impossible to test unless the existing code is refactored to
> take the env vars as a {{Map[String, String]}}, allowing a test suite to set
> the values in its own map, as in the snippet below.
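>
> With that refactor, a test can inject a fake env map instead of mutating the
> real process environment ({{appendAwsSessionEnvVars}} being the hypothetical
> helper sketched above):
> {code:scala}
> import org.apache.hadoop.conf.Configuration
>
> // Hypothetical test: pass in a fake env map, then assert on the conf.
> val conf = new Configuration(false)
> appendAwsSessionEnvVars(Map("AWS_SESSION_TOKEN" -> "t0ken"), conf)
> assert(conf.get("fs.s3a.session.token") == "t0ken")
> {code}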
> Side issue: what if only half the env vars are set and users are trying to
> understand why auth is failing? It may be good to build up a string
> identifying which env vars had their values propagated, and log that at
> debug level, while not logging the values, obviously; a sketch follows.
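>
> For illustration, a sketch of such logging, assuming the env map from the
> sketches above and Spark's internal {{Logging}} trait for {{logDebug}}:
> {code:scala}
> // Log only the *names* of the env vars that were propagated, never values.
> val awsVars = Seq("AWS_ACCESS_KEY_ID", "AWS_SECRET_ACCESS_KEY",
>   "AWS_SESSION_TOKEN", "AWS_DEFAULT_REGION")
> val propagated = awsVars.filter(env.contains)
> logDebug(s"Propagated AWS env vars: ${propagated.mkString(", ")}")
> {code}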