[
https://issues.apache.org/jira/browse/HADOOP-10326?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13897423#comment-13897423
]
Hadoop QA commented on HADOOP-10326:
------------------------------------
{color:green}+1 overall{color}. Here are the results of testing the latest attachment
http://issues.apache.org/jira/secure/attachment/12627825/0001-HADOOP-10326.-s3-s3n-does-not-support-tokens.patch
against trunk revision .
{color:green}+1 @author{color}. The patch does not contain any @author tags.
{color:green}+1 tests included{color}. The patch appears to include 2 new or modified test files.
{color:green}+1 javac{color}. The applied patch does not increase the total number of javac compiler warnings.
{color:green}+1 javadoc{color}. There were no new javadoc warning messages.
{color:green}+1 eclipse:eclipse{color}. The patch built with eclipse:eclipse.
{color:green}+1 findbugs{color}. The patch does not introduce any new Findbugs (version 1.3.9) warnings.
{color:green}+1 release audit{color}. The applied patch does not increase the total number of release audit warnings.
{color:green}+1 core tests{color}. The patch passed unit tests in hadoop-common-project/hadoop-common.
{color:green}+1 contrib tests{color}. The patch passed contrib unit tests.
Test results:
https://builds.apache.org/job/PreCommit-HADOOP-Build/3554//testReport/
Console output:
https://builds.apache.org/job/PreCommit-HADOOP-Build/3554//console
This message is automatically generated.
> M/R jobs can not access S3 if Kerberos is enabled
> -------------------------------------------------
>
> Key: HADOOP-10326
> URL: https://issues.apache.org/jira/browse/HADOOP-10326
> Project: Hadoop Common
> Issue Type: Bug
> Components: security
> Affects Versions: 2.2.0
> Environment: hadoop-1.0.0; MIT kerberos; java 1.6.0_26
> CDH4.3.0 (hadoop 2.0.0-alpha); MIT kerberos; java 1.6.0_26
> Reporter: Manuel DE FERRAN
> Labels: s3
> Attachments: 0001-HADOOP-10326.-s3-s3n-does-not-support-tokens.patch
>
>
> With Kerberos enabled, any job that takes S3 files as input or output fails.
> This can easily be reproduced with the wordcount example shipped in
> hadoop-examples.jar and a public S3 file:
> {code}
> /opt/hadoop/bin/hadoop --config /opt/hadoop/conf/ jar /opt/hadoop/hadoop-examples-1.0.0.jar wordcount s3n://ubikodpublic/test out01
> {code}
> returns:
> {code}
> 12/08/10 12:40:19 INFO hdfs.DFSClient: Created HDFS_DELEGATION_TOKEN token 192 for hadoop on 10.85.151.233:9000
> 12/08/10 12:40:19 INFO security.TokenCache: Got dt for hdfs://aws04.machine.com:9000/mapred/staging/hadoop/.staging/job_201208101229_0004;uri=10.85.151.233:9000;t.service=10.85.151.233:9000
> 12/08/10 12:40:19 INFO mapred.JobClient: Cleaning up the staging area hdfs://aws04.machine.com:9000/mapred/staging/hadoop/.staging/job_201208101229_0004
> java.lang.IllegalArgumentException: java.net.UnknownHostException: ubikodpublic
>         at org.apache.hadoop.security.SecurityUtil.buildTokenService(SecurityUtil.java:293)
>         at org.apache.hadoop.security.SecurityUtil.buildDTServiceName(SecurityUtil.java:317)
>         at org.apache.hadoop.fs.FileSystem.getCanonicalServiceName(FileSystem.java:189)
>         at org.apache.hadoop.mapreduce.security.TokenCache.obtainTokensForNamenodesInternal(TokenCache.java:92)
>         at org.apache.hadoop.mapreduce.security.TokenCache.obtainTokensForNamenodes(TokenCache.java:79)
>         at org.apache.hadoop.mapreduce.lib.input.FileInputFormat.listStatus(FileInputFormat.java:197)
>         at org.apache.hadoop.mapreduce.lib.input.FileInputFormat.getSplits(FileInputFormat.java:252)
>         <SNIP>
> {code}
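The trace shows TokenCache asking every input filesystem for a canonical service name and then trying to resolve the S3 bucket name ("ubikodpublic") as a token-service host. The patch title ("s3/s3n does not support tokens") suggests the fix is for those filesystems to declare that they issue no delegation tokens, so the token collector skips them instead of failing. Below is a minimal, self-contained sketch of that skip logic, not Hadoop's actual code: the `Fs` interface and `obtainTokenServices` helper are hypothetical stand-ins for `org.apache.hadoop.fs.FileSystem` and `TokenCache.obtainTokensForNamenodes`.

```java
import java.util.ArrayList;
import java.util.Arrays;
import java.util.List;

public class TokenCacheSketch {
    // Hypothetical stand-in for org.apache.hadoop.fs.FileSystem.
    public interface Fs {
        // Returning null signals "this filesystem issues no delegation tokens".
        String getCanonicalServiceName();
    }

    // Sketch of the skip behavior: filesystems without a canonical service
    // name (e.g. s3/s3n) contribute no token, instead of forcing the caller
    // to resolve a bucket name as if it were a NameNode host.
    public static List<String> obtainTokenServices(List<Fs> filesystems) {
        List<String> services = new ArrayList<>();
        for (Fs fs : filesystems) {
            String service = fs.getCanonicalServiceName();
            if (service == null) {
                continue; // token-less filesystem: skip, do not fail
            }
            services.add(service);
        }
        return services;
    }

    public static void main(String[] args) {
        Fs hdfs = () -> "10.85.151.233:9000"; // HDFS advertises a token service
        Fs s3n = () -> null;                  // s3n opts out of delegation tokens
        // Only the HDFS service survives; the S3 filesystem is skipped.
        System.out.println(obtainTokenServices(Arrays.asList(hdfs, s3n)));
    }
}
```

Under this reading, a Kerberized wordcount over `s3n://` input would gather its HDFS delegation token as before while the S3 filesystem simply drops out of token collection.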
--
This message was sent by Atlassian JIRA
(v6.1.5#6160)