[
https://issues.apache.org/jira/browse/HADOOP-6988?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12919181#action_12919181
]
Devaraj Das commented on HADOOP-6988:
-------------------------------------
Although it is true that HADOOP_TOKEN_FILE_LOCATION can be used to make normal
HDFS commands work, the intent for having this was to support security for
Map/Reduce tasks and Hadoop streaming apps that internally invoke
command-line HDFS operations (as Owen had pointed out earlier). If you want to
pass multiple tokens during job submission, the preferred approach would be to
write the tokens into a file (using the Credentials class's utilities), and
then point mapreduce.job.credentials.binary to that file.
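A rough sketch of that approach (untested, assumes the Hadoop client libraries on the classpath; the namenode URIs, the renewer name, and the token file path are all placeholders):

```java
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.security.Credentials;
import org.apache.hadoop.security.token.Token;

public class TokenFileSketch {
  public static void main(String[] args) throws Exception {
    Configuration conf = new Configuration();
    Credentials creds = new Credentials();

    // Placeholder namenode URIs -- substitute the clusters you need tokens for.
    for (String uri : new String[] {"hdfs://nn1:8020", "hdfs://nn2:8020"}) {
      FileSystem fs = FileSystem.get(java.net.URI.create(uri), conf);
      // "mr-user" is an illustrative renewer principal.
      Token<?> token = fs.getDelegationToken("mr-user");
      creds.addToken(new Text(uri), token);
    }

    // Serialize all collected tokens into a single binary file...
    Path tokenFile = new Path("/tmp/job.tokens");
    creds.writeTokenStorageFile(tokenFile, conf);

    // ...and point the job submission at it.
    conf.set("mapreduce.job.credentials.binary", tokenFile.toString());
  }
}
```

The tasks of the submitted job would then see all of the serialized tokens, rather than only the one file HADOOP_TOKEN_FILE_LOCATION can name.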
Thinking about it, wouldn't the option of defining mapreduce.job.hdfs-servers
in the job configuration work for you? The JobClient will automatically get
delegation tokens from those namenodes, and all tasks of the job can use those
tokens.
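For illustration, a job-configuration fragment along those lines (the hostnames are placeholders; the exact value format should be checked against the Hadoop version in use):

```xml
<!-- JobClient fetches delegation tokens from each listed namenode
     at submission time, on the job submitter's behalf. -->
<property>
  <name>mapreduce.job.hdfs-servers</name>
  <value>hdfs://nn1.example.com:8020,hdfs://nn2.example.com:8020</value>
</property>
```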
> Add support for reading multiple hadoop delegation token files
> --------------------------------------------------------------
>
> Key: HADOOP-6988
> URL: https://issues.apache.org/jira/browse/HADOOP-6988
> Project: Hadoop Common
> Issue Type: Improvement
> Components: security
> Affects Versions: 0.22.0
> Reporter: Aaron T. Myers
> Assignee: Aaron T. Myers
> Attachments: hadoop-6988.0.txt, hadoop-6988.1.txt
>
>
> It would be nice if there were a way to specify multiple delegation token
> files via the HADOOP_TOKEN_FILE_LOCATION environment variable and the
> "mapreduce.job.credentials.binary" configuration value. I suggest a
> colon-separated list of paths, each of which is read as a separate delegation
> token file.