[ https://issues.apache.org/jira/browse/HDFS-4305?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13645344#comment-13645344 ]
Hudson commented on HDFS-4305:
------------------------------
Integrated in Hadoop-trunk-Commit #3699 (See
[https://builds.apache.org/job/Hadoop-trunk-Commit/3699/])
Move the CHANGES.txt entry for HDFS-4305 to the incompatible changes
section. (Revision 1477488)
Result = SUCCESS
atm : http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1477488
Files :
* /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
> Add a configurable limit on number of blocks per file, and min block size
> -------------------------------------------------------------------------
>
> Key: HDFS-4305
> URL: https://issues.apache.org/jira/browse/HDFS-4305
> Project: Hadoop HDFS
> Issue Type: Bug
> Components: namenode
> Affects Versions: 1.0.4, 2.0.4-alpha
> Reporter: Todd Lipcon
> Assignee: Andrew Wang
> Priority: Minor
> Fix For: 2.0.5-beta
>
> Attachments: hdfs-4305-1.patch, hdfs-4305-2.patch, hdfs-4305-3.patch
>
>
> We recently had an issue where a user set the block size extremely low and
> managed to create a single file with hundreds of thousands of blocks. This
> caused problems with the edit log, since the resulting OP_ADD op was so
> large (HDFS-4304). I imagine it could also cause efficiency issues in the
> NN. To prevent users from making such mistakes, we should:
> - introduce a configurable minimum block size, below which requests are
> rejected
> - introduce a configurable maximum number of blocks per file, above which
> requests to add another block are rejected (with a suitably high default so
> as not to prevent legitimate large files); a sketch of both checks follows
> below
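
For illustration, here is a minimal sketch of the two proposed checks in
Java. The config key names and defaults follow the dfs.namenode.fs-limits.*
naming convention from hdfs-default.xml, but they, along with the exception
types and the wiring into the NameNode, are assumptions for this sketch, not
the committed patch.

{code:java}
import java.io.IOException;

// Minimal sketch of the two proposed limits. Key names, defaults, and
// exception types here are illustrative assumptions, not the actual patch.
public class BlockLimitChecker {
  // Hypothetical config keys, following the dfs.namenode.fs-limits.* style.
  static final String MIN_BLOCK_SIZE_KEY =
      "dfs.namenode.fs-limits.min-block-size";
  static final long MIN_BLOCK_SIZE_DEFAULT = 1L << 20;       // 1 MB floor

  static final String MAX_BLOCKS_PER_FILE_KEY =
      "dfs.namenode.fs-limits.max-blocks-per-file";
  static final long MAX_BLOCKS_PER_FILE_DEFAULT = 1L << 20;  // suitably high

  private final long minBlockSize;
  private final long maxBlocksPerFile;

  BlockLimitChecker(long minBlockSize, long maxBlocksPerFile) {
    this.minBlockSize = minBlockSize;
    this.maxBlocksPerFile = maxBlocksPerFile;
  }

  // Reject create() requests whose block size is below the configured floor.
  void checkBlockSize(long requestedBlockSize) throws IOException {
    if (requestedBlockSize < minBlockSize) {
      throw new IOException("Specified block size " + requestedBlockSize
          + " is below the configured minimum of " + minBlockSize);
    }
  }

  // Reject addBlock() requests once a file already holds the maximum count.
  void checkBlockCount(long currentBlockCount) throws IOException {
    if (currentBlockCount >= maxBlocksPerFile) {
      throw new IOException("File already has " + currentBlockCount
          + " blocks; limit is " + maxBlocksPerFile);
    }
  }
}
{code}

With defaults like these, a create() call requesting a 1 KB block size, or an
addBlock() call on a file that already holds the maximum number of blocks,
would fail fast at the NameNode instead of producing an oversized OP_ADD in
the edit log.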