[ https://issues.apache.org/jira/browse/HADOOP-5861?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12715307#action_12715307 ]

Hadoop QA commented on HADOOP-5861:
-----------------------------------

-1 overall.  Here are the results of testing the latest attachment 
  http://issues.apache.org/jira/secure/attachment/12409550/hadoop-5861-v2.patch
  against trunk revision 780777.

    +1 @author.  The patch does not contain any @author tags.

    +1 tests included.  The patch appears to include 8 new or modified tests.

    +1 javadoc.  The javadoc tool did not generate any warning messages.

    +1 javac.  The applied patch does not increase the total number of javac 
compiler warnings.

    +1 findbugs.  The patch does not introduce any new Findbugs warnings.

    +1 Eclipse classpath. The patch retains Eclipse classpath integrity.

    +1 release audit.  The applied patch does not increase the total number of 
release audit warnings.

    +1 core tests.  The patch passed core unit tests.

    -1 contrib tests.  The patch failed contrib unit tests.

Test results: 
http://hudson.zones.apache.org/hudson/job/Hadoop-Patch-vesta.apache.org/447/testReport/
Findbugs warnings: 
http://hudson.zones.apache.org/hudson/job/Hadoop-Patch-vesta.apache.org/447/artifact/trunk/build/test/findbugs/newPatchFindbugsWarnings.html
Checkstyle results: 
http://hudson.zones.apache.org/hudson/job/Hadoop-Patch-vesta.apache.org/447/artifact/trunk/build/test/checkstyle-errors.html
Console output: 
http://hudson.zones.apache.org/hudson/job/Hadoop-Patch-vesta.apache.org/447/console

This message is automatically generated.

> s3n files are not getting split by default 
> -------------------------------------------
>
>                 Key: HADOOP-5861
>                 URL: https://issues.apache.org/jira/browse/HADOOP-5861
>             Project: Hadoop Core
>          Issue Type: Bug
>          Components: fs/s3
>    Affects Versions: 0.19.1
>         Environment: ec2
>            Reporter: Joydeep Sen Sarma
>            Assignee: Tom White
>         Attachments: hadoop-5861-v2.patch, hadoop-5861.patch
>
>
> Running with the stock EC2 scripts against hadoop-19, I tried to run a job 
> against a directory with 4 text files, each about 2G in size. They were not 
> split (only 4 mappers were run).
> The reason seems to have two parts, primarily that S3N files report a block 
> size of 5G. This causes FileInputFormat.getSplits to fall back on the goal 
> size (which is totalSize / conf.get("mapred.map.tasks")). The goal size in 
> this case was 4G, hence the files were not split. This is not an issue with 
> other file systems, since the block size they report is much smaller and the 
> splits end up based on block size (not goal size).
> Can we make the S3N files report a more reasonable block size?
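The fallback described above can be sketched with the split-size formula FileInputFormat uses, max(minSize, min(goalSize, blockSize)). This is a minimal illustration, not Hadoop code; the class name is made up, and the mapred.map.tasks value of 2 is an assumption inferred from the report (8G total input, 4G goal size).

```java
// Sketch of the split-size arithmetic from the description above.
// computeSplitSize mirrors FileInputFormat's formula; the map-task count
// of 2 is a hypothetical value inferred from 8G total / 4G goal size.
public class S3nSplitSketch {

    // Same shape as FileInputFormat's split-size choice.
    static long computeSplitSize(long goalSize, long minSize, long blockSize) {
        return Math.max(minSize, Math.min(goalSize, blockSize));
    }

    public static void main(String[] args) {
        long gb = 1024L * 1024L * 1024L;

        long blockSize   = 5 * gb;       // block size reported by S3N
        long fileSize    = 2 * gb;       // each of the 4 input files
        long totalSize   = 4 * fileSize; // 8G of input
        long numMapTasks = 2;            // hypothetical mapred.map.tasks
        long goalSize    = totalSize / numMapTasks; // 4G, as in the report
        long minSize     = 1;            // default minimum split size

        long splitSize = computeSplitSize(goalSize, minSize, blockSize);
        // min(4G, 5G) = 4G: the 5G block size never caps the split size,
        // so the goal size wins. Each 2G file is smaller than 4G and
        // therefore yields a single split (one mapper per file).
        long splitsPerFile = (fileSize + splitSize - 1) / splitSize;
        System.out.println("splitSize=" + splitSize
                + " splitsPerFile=" + splitsPerFile);
        // With a smaller reported block size (e.g. 64M), min(4G, 64M) = 64M
        // would win and each 2G file would be split into 32 pieces.
    }
}
```

The fix direction the reporter suggests follows directly: if S3N reported a block size below the goal size, the min() term would select the block size and the files would split as they do on other file systems.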

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.
