Todd Lipcon created HDFS-4305:
---------------------------------
Summary: Add a configurable limit on number of blocks per file, and min block size
Key: HDFS-4305
URL: https://issues.apache.org/jira/browse/HDFS-4305
Project: Hadoop HDFS
Issue Type: Bug
Components: namenode
Affects Versions: 2.0.2-alpha, 1.0.4, 3.0.0
Reporter: Todd Lipcon
Priority: Minor
We recently had an issue where a user set the block size extremely low and
managed to create a single file with hundreds of thousands of blocks (for
example, a 1 GB file written with a 4 KB block size spans over 260,000 blocks).
This caused problems with the edit log, since the resulting OP_ADD op was so
large (HDFS-4304). I imagine it could also cause efficiency issues in the NN,
which tracks per-block metadata in memory. To prevent users from making such
mistakes, we should:
- introduce a configurable minimum block size, below which create requests are
rejected
- introduce a configurable maximum number of blocks per file, above which
requests to add another block are rejected (with a default high enough not to
reject legitimate large files); a sketch of both checks follows this list
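As a rough illustration, the two checks could look like the standalone sketch
below. In a real patch they would live in the NameNode's create() and
addBlock() paths; the config key names (dfs.namenode.fs-limits.min-block-size,
dfs.namenode.fs-limits.max-blocks-per-file) and the default values shown here
are assumptions for the sake of the example, not anything decided by this
issue.

{code:java}
import java.io.IOException;

/**
 * Standalone sketch of the two proposed NameNode-side limits.
 * Key names and defaults are illustrative assumptions only.
 */
public class FileLimitChecks {

  // Hypothetical config keys and defaults.
  static final String MIN_BLOCK_SIZE_KEY =
      "dfs.namenode.fs-limits.min-block-size";
  static final long MIN_BLOCK_SIZE_DEFAULT = 1024 * 1024;       // 1 MB

  static final String MAX_BLOCKS_PER_FILE_KEY =
      "dfs.namenode.fs-limits.max-blocks-per-file";
  static final long MAX_BLOCKS_PER_FILE_DEFAULT = 1024 * 1024;  // 1M blocks

  private final long minBlockSize;
  private final long maxBlocksPerFile;

  FileLimitChecks(long minBlockSize, long maxBlocksPerFile) {
    this.minBlockSize = minBlockSize;
    this.maxBlocksPerFile = maxBlocksPerFile;
  }

  /** Reject a create() request whose block size is below the minimum. */
  void verifyBlockSize(String src, long blockSize) throws IOException {
    if (blockSize < minBlockSize) {
      throw new IOException("Specified block size " + blockSize
          + " for " + src + " is below the configured minimum "
          + minBlockSize + " (" + MIN_BLOCK_SIZE_KEY + ")");
    }
  }

  /** Reject an addBlock() request once a file holds too many blocks. */
  void verifyBlockCount(String src, int currentBlockCount)
      throws IOException {
    if (currentBlockCount >= maxBlocksPerFile) {
      throw new IOException("File " + src + " already has "
          + currentBlockCount + " blocks; the configured maximum is "
          + maxBlocksPerFile + " (" + MAX_BLOCKS_PER_FILE_KEY + ")");
    }
  }
}
{code}

With defaults like these, a file written at a typical 128 MB block size could
still grow to roughly 128 TB before hitting the block-count cap, so only
pathologically small block sizes would be affected.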