[ https://issues.apache.org/jira/browse/HADOOP-5805?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12713840#action_12713840 ]
Hadoop QA commented on HADOOP-5805:
-----------------------------------
-1 overall. Here are the results of testing the latest attachment
http://issues.apache.org/jira/secure/attachment/12409019/HADOOP-5805-2.patch
against trunk revision 779338.
+1 @author. The patch does not contain any @author tags.
+1 tests included. The patch appears to include 4 new or modified tests.
+1 javadoc. The javadoc tool did not generate any warning messages.
+1 javac. The applied patch does not increase the total number of javac
compiler warnings.
+1 findbugs. The patch does not introduce any new Findbugs warnings.
+1 Eclipse classpath. The patch retains Eclipse classpath integrity.
+1 release audit. The applied patch does not increase the total number of
release audit warnings.
+1 core tests. The patch passed core unit tests.
-1 contrib tests. The patch failed contrib unit tests.
Test results:
http://hudson.zones.apache.org/hudson/job/Hadoop-Patch-vesta.apache.org/415/testReport/
Findbugs warnings:
http://hudson.zones.apache.org/hudson/job/Hadoop-Patch-vesta.apache.org/415/artifact/trunk/build/test/findbugs/newPatchFindbugsWarnings.html
Checkstyle results:
http://hudson.zones.apache.org/hudson/job/Hadoop-Patch-vesta.apache.org/415/artifact/trunk/build/test/checkstyle-errors.html
Console output:
http://hudson.zones.apache.org/hudson/job/Hadoop-Patch-vesta.apache.org/415/console
This message is automatically generated.
> problem using top level s3 buckets as input/output directories
> --------------------------------------------------------------
>
> Key: HADOOP-5805
> URL: https://issues.apache.org/jira/browse/HADOOP-5805
> Project: Hadoop Core
> Issue Type: Bug
> Components: fs/s3
> Affects Versions: 0.18.3
> Environment: ec2, cloudera AMI, 20 nodes
> Reporter: Arun Jacob
> Assignee: Ian Nowland
> Fix For: 0.21.0
>
> Attachments: HADOOP-5805-0.patch, HADOOP-5805-1.patch,
> HADOOP-5805-2.patch
>
>
> When I specify top-level S3 buckets as input or output directories, I get the
> following exception.
> hadoop jar subject-map-reduce.jar s3n://infocloud-input s3n://infocloud-output
> java.lang.IllegalArgumentException: Path must be absolute: s3n://infocloud-output
>   at org.apache.hadoop.fs.s3native.NativeS3FileSystem.pathToKey(NativeS3FileSystem.java:246)
>   at org.apache.hadoop.fs.s3native.NativeS3FileSystem.getFileStatus(NativeS3FileSystem.java:319)
>   at org.apache.hadoop.fs.FileSystem.exists(FileSystem.java:667)
>   at org.apache.hadoop.mapred.FileOutputFormat.checkOutputSpecs(FileOutputFormat.java:109)
>   at org.apache.hadoop.mapred.JobClient.submitJob(JobClient.java:738)
>   at org.apache.hadoop.mapred.JobClient.runJob(JobClient.java:1026)
>   at com.evri.infocloud.prototype.subjectmapreduce.SubjectMRDriver.run(SubjectMRDriver.java:63)
>   at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:65)
>   at com.evri.infocloud.prototype.subjectmapreduce.SubjectMRDriver.main(SubjectMRDriver.java:25)
>   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>   at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
>   at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
>   at java.lang.reflect.Method.invoke(Method.java:597)
>   at org.apache.hadoop.util.RunJar.main(RunJar.java:155)
>   at org.apache.hadoop.mapred.JobShell.run(JobShell.java:54)
>   at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:65)
>   at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:79)
>   at org.apache.hadoop.mapred.JobShell.main(JobShell.java:68)
> The workaround is to specify input/output buckets with sub-directories:
>
> hadoop jar subject-map-reduce.jar s3n://infocloud-input/input-subdir s3n://infocloud-output/output-subdir
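
The stack trace points at NativeS3FileSystem.pathToKey (NativeS3FileSystem.java:246), which rejects any Path whose URI lacks an absolute path component. A bare bucket URI such as s3n://infocloud-output carries an empty path, so that check throws before the bucket root can be resolved as a directory. Below is a minimal sketch of the failure mode and of the kind of special case a fix needs; the class name PathToKeySketch is hypothetical, and the method body is an approximation of the pre-fix logic rather than the contents of HADOOP-5805-2.patch.

import org.apache.hadoop.fs.Path;

// Illustrative sketch only; mirrors the shape of NativeS3FileSystem.pathToKey.
class PathToKeySketch {

  static String pathToKey(Path path) {
    // A bare bucket URI such as s3n://infocloud-output has scheme "s3n",
    // authority "infocloud-output" and an EMPTY path component. Mapping it
    // to the empty key lets the bucket root act as a directory.
    if (path.toUri().getScheme() != null && path.toUri().getPath().isEmpty()) {
      return "";
    }
    // This is the check that fires in the reported failure: an empty path
    // does not start with '/', so isAbsolute() is false.
    if (!path.isAbsolute()) {
      throw new IllegalArgumentException("Path must be absolute: " + path);
    }
    // Drop the leading '/' so "/dir/file" becomes the S3 key "dir/file".
    return path.toUri().getPath().substring(1);
  }
}

With the bucket root mapped to the empty key, a call such as getFileStatus(new Path("s3n://infocloud-output")) would be able to treat the root as a directory, which is the step where FileOutputFormat.checkOutputSpecs currently fails and which the sub-directory workaround above sidesteps.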