[
https://issues.apache.org/jira/browse/HADOOP-15351?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16422580#comment-16422580
]
Steve Loughran commented on HADOOP-15351:
-----------------------------------------
What is your proposed strategy for:
* anyone who sets their secrets in their client-side Spark job configuration so
that they are dynamically uploaded with the job submission
* Spark itself, whose job submission will pick up any local AWS access key ID,
secret key and session token and save them as the Hadoop properties.
These are job submissions where the secrets are passed in with the job
(SPARK-19739), and since it's people putting their own secrets into a shared
cluster, I'm not sure how much of a security risk they'd be.
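For reference, this is a rough sketch of the path I mean: the AWS credentials
end up as the standard s3a configuration properties via the spark.hadoop.*
prefix (the key values here are placeholders):

  spark-submit \
    --conf spark.hadoop.fs.s3a.access.key=<access key> \
    --conf spark.hadoop.fs.s3a.secret.key=<secret key> \
    --conf spark.hadoop.fs.s3a.session.token=<session token> \
    ...

Those spark.hadoop.* options get copied into the Hadoop Configuration the job
uses, so the secrets travel with the job configuration.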
To replace them you need to move to a different workflow: save the values of
the environment variables to a JCEKS credential file, upload that to cluster
storage and then refer to it in the job configuration. Which, for s3a at the
very least, means you'll need to be using Hadoop 2.8+.
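Roughly (a sketch only; the filesystem paths and hostnames are invented):

  hadoop credential create fs.s3a.access.key -value <access key> \
      -provider jceks://hdfs@nn1/user/alice/s3a.jceks
  hadoop credential create fs.s3a.secret.key -value <secret key> \
      -provider jceks://hdfs@nn1/user/alice/s3a.jceks

(omit -value to be prompted for the secret rather than leaving it in the shell
history), then point the configuration at the file:

  hadoop.security.credential.provider.path = jceks://hdfs@nn1/user/alice/s3a.jceks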
Propagation of secrets with job submission is something which needs to be
looked at. I've looked at adding delegation token support for this
(HADOOP-14556), but it's unfinished, and I now think I'd like to make that
more of a plugin point (so you can choose to use Assumed Role session tokens,
...). I think we'll need all the object stores to move to that before we can
start to tell people off for using secrets inline.
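Something in the style of the assumed role support already in trunk, e.g.
(the role ARN is made up; check the s3a docs for the exact property names):

  fs.s3a.aws.credentials.provider = org.apache.hadoop.fs.s3a.auth.AssumedRoleCredentialProvider
  fs.s3a.assumed.role.arn = arn:aws:iam::123456789012:role/example-role

A delegation token plugin could then hand out the short-lived session
credentials that provider generates rather than the long-lived account secrets.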
> Deprecate all existing password fields in hadoop configuration
> --------------------------------------------------------------
>
> Key: HADOOP-15351
> URL: https://issues.apache.org/jira/browse/HADOOP-15351
> Project: Hadoop Common
> Issue Type: Improvement
> Reporter: Wei-Chiu Chuang
> Priority: Major
>
> In HADOOP-15325 [~shv] suggests we should mark all password fields in the
> configuration file as deprecated.
> Raising this Jira to track this work on the Hadoop side.