[ https://issues.apache.org/jira/browse/HDFS-5517?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14607527#comment-14607527 ]

Hadoop QA commented on HDFS-5517:
---------------------------------

\\
\\
| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:red}-1{color} | patch |   0m  0s | The patch command could not apply the patch during dryrun. |
\\
\\
|| Subsystem || Report/Notes ||
| Patch URL | http://issues.apache.org/jira/secure/attachment/12614116/HDFS-5517.patch |
| Optional Tests | javadoc javac unit findbugs checkstyle |
| git revision | trunk / d3797f9 |
| Console output | https://builds.apache.org/job/PreCommit-HDFS-Build/11542/console |


This message was automatically generated.

> Lower the default maximum number of blocks per file
> ---------------------------------------------------
>
>                 Key: HDFS-5517
>                 URL: https://issues.apache.org/jira/browse/HDFS-5517
>             Project: Hadoop HDFS
>          Issue Type: Bug
>          Components: namenode
>    Affects Versions: 2.2.0
>            Reporter: Aaron T. Myers
>            Assignee: Aaron T. Myers
>              Labels: BB2015-05-TBR
>         Attachments: HDFS-5517.patch
>
>
> We introduced the maximum number of blocks per file in HDFS-4305, but we set
> the default to 1 million (1MM). In practice this limit is so high that it is never
> hit, whereas we know that an individual file with tens of thousands of blocks
> can cause problems. We should lower the default value, in my opinion to 10k.
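
For readers of the archive, the limit in question is the NameNode fs-limits setting added by HDFS-4305, which an operator can override per cluster in hdfs-site.xml. The sketch below assumes the property name dfs.namenode.fs-limits.max-blocks-per-file along with the ~1 million default and the 10k value proposed above; verify both against the Hadoop release in use.

{code:xml}
<!-- hdfs-site.xml (sketch): cap how many blocks a single file may hold. -->
<!-- Property name assumed from HDFS-4305; the 2.2.0 default is believed to be 1048576 (~1 million). -->
<property>
  <name>dfs.namenode.fs-limits.max-blocks-per-file</name>
  <!-- Value proposed in this issue: roughly 10k instead of ~1 million. -->
  <value>10000</value>
</property>
{code}

With a lower cap, the NameNode refuses further block allocations once a file reaches the limit, so a runaway writer fails fast instead of accumulating tens of thousands of blocks in NameNode memory.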



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)
