[ https://issues.apache.org/jira/browse/HADOOP-5805?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12711910#action_12711910 ]
Hadoop QA commented on HADOOP-5805:
-----------------------------------

-1 overall. Here are the results of testing the latest attachment
http://issues.apache.org/jira/secure/attachment/12408752/HADOOP-5805-1.patch
against trunk revision 777330.

    +1 @author. The patch does not contain any @author tags.

    +1 tests included. The patch appears to include 4 new or modified tests.

    -1 patch. The patch command could not apply the patch.

Console output: http://hudson.zones.apache.org/hudson/job/Hadoop-Patch-vesta.apache.org/375/console

This message is automatically generated.

> problem using top level s3 buckets as input/output directories
> --------------------------------------------------------------
>
>                 Key: HADOOP-5805
>                 URL: https://issues.apache.org/jira/browse/HADOOP-5805
>             Project: Hadoop Core
>          Issue Type: Bug
>          Components: fs/s3
>    Affects Versions: 0.18.3
>         Environment: ec2, cloudera AMI, 20 nodes
>            Reporter: Arun Jacob
>            Assignee: Ian Nowland
>             Fix For: 0.21.0
>
>         Attachments: HADOOP-5805-0.patch, HADOOP-5805-1.patch
>
>
> When I specify top-level S3 buckets as input or output directories, I get the following exception:
>
> hadoop jar subject-map-reduce.jar s3n://infocloud-input s3n://infocloud-output
>
> java.lang.IllegalArgumentException: Path must be absolute: s3n://infocloud-output
>     at org.apache.hadoop.fs.s3native.NativeS3FileSystem.pathToKey(NativeS3FileSystem.java:246)
>     at org.apache.hadoop.fs.s3native.NativeS3FileSystem.getFileStatus(NativeS3FileSystem.java:319)
>     at org.apache.hadoop.fs.FileSystem.exists(FileSystem.java:667)
>     at org.apache.hadoop.mapred.FileOutputFormat.checkOutputSpecs(FileOutputFormat.java:109)
>     at org.apache.hadoop.mapred.JobClient.submitJob(JobClient.java:738)
>     at org.apache.hadoop.mapred.JobClient.runJob(JobClient.java:1026)
>     at com.evri.infocloud.prototype.subjectmapreduce.SubjectMRDriver.run(SubjectMRDriver.java:63)
>     at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:65)
>     at com.evri.infocloud.prototype.subjectmapreduce.SubjectMRDriver.main(SubjectMRDriver.java:25)
>     at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>     at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
>     at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
>     at java.lang.reflect.Method.invoke(Method.java:597)
>     at org.apache.hadoop.util.RunJar.main(RunJar.java:155)
>     at org.apache.hadoop.mapred.JobShell.run(JobShell.java:54)
>     at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:65)
>     at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:79)
>     at org.apache.hadoop.mapred.JobShell.main(JobShell.java:68)
>
> The workaround is to specify input/output buckets with subdirectories:
>
> hadoop jar subject-map-reduce.jar s3n://infocloud-input/input-subdir s3n://infocloud-output/output-subdir
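For context on the failure mode: the exception originates in NativeS3FileSystem.pathToKey, which rejects any Path whose URI has no "/"-prefixed path component, and a bare bucket URI like s3n://infocloud-output has an empty path. Below is a minimal, self-contained Java sketch of that behavior and one plausible special case that maps the bucket root to the empty S3 key. This is an illustration under those assumptions, not the contents of HADOOP-5805-1.patch; the class name S3RootPathSketch and the simplified pathToKey helper are hypothetical.

    import java.net.URI;

    // Sketch of why a bucket-root URI trips a "Path must be absolute"
    // check, and how a pathToKey-style method could allow the root.
    // Simplified assumption; not the actual NativeS3FileSystem code.
    public class S3RootPathSketch {

        static String pathToKey(URI uri) {
            // getPath() is "" for s3n://bucket and "/sub" for s3n://bucket/sub
            String path = uri.getPath();
            if (uri.getScheme() != null && path.isEmpty()) {
                // A URI with a scheme but no path component refers to the
                // bucket root; map it to the empty key instead of failing.
                return "";
            }
            if (!path.startsWith("/")) {
                // Without the special case above, a bare bucket URI lands
                // here, matching the IllegalArgumentException in the trace.
                throw new IllegalArgumentException("Path must be absolute: " + uri);
            }
            // Drop the leading slash to form the S3 object key.
            return path.substring(1);
        }

        public static void main(String[] args) {
            // Bucket root resolves to the empty key: ""
            System.out.println("'" + pathToKey(URI.create("s3n://infocloud-output")) + "'");
            // Subdirectory resolves to its key: "output-subdir"
            System.out.println("'" + pathToKey(URI.create("s3n://infocloud-output/output-subdir")) + "'");
        }
    }

Until a patch that applies cleanly lands, the subdirectory workaround shown in the description avoids the empty-path case entirely, since the URI then carries a non-empty "/"-prefixed path.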