[
https://issues.apache.org/jira/browse/HDFS-4305?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
]
Aaron T. Myers updated HDFS-4305:
---------------------------------
Release Note: This change introduces a maximum number of blocks per file,
by default one million, and a minimum block size, by default 1 MB. Both can
be changed via the configuration settings
"dfs.namenode.fs-limits.max-blocks-per-file" and
"dfs.namenode.fs-limits.min-block-size", respectively.
Hadoop Flags: Incompatible change, Reviewed (was: Reviewed)
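
For illustration, a deployment could override these limits in hdfs-site.xml.
This is a sketch, not a recommendation: the property names come from the
release note above, and the values shown simply restate the documented
defaults (the minimum block size is assumed to be expressed in bytes).

  <property>
    <name>dfs.namenode.fs-limits.max-blocks-per-file</name>
    <!-- maximum number of blocks a single file may have (default: 1,000,000) -->
    <value>1000000</value>
  </property>
  <property>
    <name>dfs.namenode.fs-limits.min-block-size</name>
    <!-- minimum block size; 1048576 bytes = 1 MB (unit assumed to be bytes) -->
    <value>1048576</value>
  </property>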
> Add a configurable limit on number of blocks per file, and min block size
> -------------------------------------------------------------------------
>
> Key: HDFS-4305
> URL: https://issues.apache.org/jira/browse/HDFS-4305
> Project: Hadoop HDFS
> Issue Type: Bug
> Components: namenode
> Affects Versions: 1.0.4, 2.0.4-alpha
> Reporter: Todd Lipcon
> Assignee: Andrew Wang
> Priority: Minor
> Fix For: 2.0.5-beta
>
> Attachments: hdfs-4305-1.patch, hdfs-4305-2.patch, hdfs-4305-3.patch
>
>
> We recently had an issue where a user set the block size very low and
> managed to create a single file with hundreds of thousands of blocks. This
> caused problems with the edit log, since the OP_ADD op was so large
> (HDFS-4304). I imagine it could also cause efficiency issues in the NN. To
> prevent users from making such mistakes, we should:
> - introduce a configurable minimum block size, below which requests are
> rejected
> - introduce a configurable maximum number of blocks per file, above which
> requests to add another block are rejected (with a default high enough not
> to prevent legitimate large files)
--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators.
For more information on JIRA, see: http://www.atlassian.com/software/jira