[ 
https://issues.apache.org/jira/browse/MAPREDUCE-5735?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13882728#comment-13882728
 ] 

Steve Loughran commented on MAPREDUCE-5735:
-------------------------------------------

# please include the release versions in the "affects version" field.
# if you haven't tested against Hadoop 2.2, please upgrade and see whether the problem still occurs.
# S3 isn't a filesystem, it's a blob store, and some aspects of MR operation 
just don't work as they would against HDFS. Why not try writing to HDFS and 
then copying out the results?
# that said, if the problem does remain against S3 on Hadoop 2.2+, then this 
could be a bug, in which case a unit test to replicate it would be a good 
start to fixing it
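The workaround suggested in point 3, writing to HDFS and then copying the results out, can be sketched with distcp. The jar, driver class, paths, and bucket name below are placeholders, not from the report:

```shell
# Run the job against HDFS first (jar/class/paths are illustrative),
# then copy the whole output tree, subdirectories included, to S3.
# s3n:// was the commonly used S3 scheme on Hadoop 2.x.
hadoop jar myjob.jar com.example.MyDriver /user/me/input /user/me/output
hadoop distcp /user/me/output s3n://my-bucket/output
```

distcp preserves the directory structure of the source tree, so a nested layout like dir1/dir2/dir3 produced on HDFS survives the copy.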

> MultipleOutputs of hadoop not working properly with s3 filesystem
> -----------------------------------------------------------------
>
>                 Key: MAPREDUCE-5735
>                 URL: https://issues.apache.org/jira/browse/MAPREDUCE-5735
>             Project: Hadoop Map/Reduce
>          Issue Type: Bug
>            Reporter: sunil ranjan khuntia
>            Priority: Minor
>
> I have written a MapReduce job and used the 
> MultipleOutputs (org.apache.hadoop.mapreduce.lib.output.MultipleOutputs) class 
> to put the resultant file in a specific user-defined directory path (instead 
> of getting the output file part-r-00000, I want to have 
> dir1/dir2/dir3/d-r-00000). This works fine for HDFS.
> But when I run the same MapReduce job with the S3 filesystem, the user-defined 
> directory structure is not created in S3. Is it that MultipleOutputs is not 
> supported in S3? If so, is there an alternate way by which I can customize my 
> MapReduce output file directory path in S3?
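For reference, the usage the reporter describes can be sketched roughly as below. The reducer class, key/value types, and the summing logic are illustrative assumptions; only the "dir1/dir2/dir3/d" base path comes from the report. No test harness is included since the fragment needs a Hadoop runtime to execute:

```java
import java.io.IOException;

import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Reducer;
import org.apache.hadoop.mapreduce.lib.output.MultipleOutputs;

// Hypothetical reducer illustrating the pattern from the report: passing a
// baseOutputPath containing '/' ("dir1/dir2/dir3/d") so that output lands in
// dir1/dir2/dir3/d-r-00000 under the job output directory instead of the
// default part-r-00000.
public class MyReducer extends Reducer<Text, LongWritable, Text, LongWritable> {

    private MultipleOutputs<Text, LongWritable> mos;

    @Override
    protected void setup(Context context) {
        mos = new MultipleOutputs<Text, LongWritable>(context);
    }

    @Override
    protected void reduce(Text key, Iterable<LongWritable> values, Context context)
            throws IOException, InterruptedException {
        long sum = 0;
        for (LongWritable v : values) {
            sum += v.get();
        }
        // Third argument is the baseOutputPath; slashes in it create
        // subdirectories under the job's output path on HDFS.
        mos.write(key, new LongWritable(sum), "dir1/dir2/dir3/d");
    }

    @Override
    protected void cleanup(Context context) throws IOException, InterruptedException {
        // MultipleOutputs must be closed, or buffered output may be lost.
        mos.close();
    }
}
```

Whether the nested directories appear on S3 then depends on the output committer's rename behavior against the blob store, which is the crux of the discussion above.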



--
This message was sent by Atlassian JIRA
(v6.1.5#6160)
