Andrew Wang commented on HDFS-5517:

I'm also updating the target version, since strictly speaking we shouldn't 
commit incompatible changes to branch-2.

> Lower the default maximum number of blocks per file
> ---------------------------------------------------
>                 Key: HDFS-5517
>                 URL: https://issues.apache.org/jira/browse/HDFS-5517
>             Project: Hadoop HDFS
>          Issue Type: Bug
>          Components: namenode
>    Affects Versions: 2.2.0
>            Reporter: Aaron T. Myers
>            Assignee: Aaron T. Myers
>              Labels: BB2015-05-TBR
>         Attachments: HDFS-5517.patch
> We introduced the maximum number of blocks per file in HDFS-4305, but we set 
> the default to 1MM (one million). In practice this limit is so high that it 
> is never hit, whereas we know that an individual file with tens of thousands 
> of blocks can cause problems. We should lower the default value, in my 
> opinion to 10k.
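
For operators who don't want to wait for a new default, the limit can already 
be lowered per cluster. A minimal hdfs-site.xml override, assuming the 
dfs.namenode.fs-limits.max-blocks-per-file property introduced in HDFS-4305:

    <!-- Cap each file at 10k blocks; further block allocations for the
         file are rejected by the NameNode once the cap is reached. -->
    <property>
      <name>dfs.namenode.fs-limits.max-blocks-per-file</name>
      <value>10000</value>
    </property>

The NameNode reads fs-limits settings at startup, so changing this value 
generally requires a NameNode restart to take effect.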
