[
https://issues.apache.org/jira/browse/HDFS-583?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13182034#comment-13182034
]
Harsh J commented on HDFS-583:
------------------------------
We should cap this in the DFSClient as well; detecting an oversized block size
early would save an RPC call.
The remaining question is: what's the best default? 8g?
I'm also sorta -0 on this, since we've never limited this before; folks who have
been writing really huge files with very large block sizes in their HDFS
already would be upset by this behavior change.
> HDFS should enforce a max block size
> ------------------------------------
>
> Key: HDFS-583
> URL: https://issues.apache.org/jira/browse/HDFS-583
> Project: Hadoop HDFS
> Issue Type: Improvement
> Components: name-node
> Reporter: Hairong Kuang
>
> When DataNode creates a replica, it should enforce a max block size, so
> clients can't go crazy. One way of enforcing this is to make
> BlockWritesStreams filter streams that check the block size.
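The filter-stream approach described above could be sketched as a FilterOutputStream subclass that counts bytes and rejects any write that would push the block past the cap. This is an illustrative sketch only; BoundedBlockOutputStream and its constructor are invented names, not Hadoop code.

```java
import java.io.FilterOutputStream;
import java.io.IOException;
import java.io.OutputStream;

// Hypothetical filter stream enforcing a max block size on the write path.
public class BoundedBlockOutputStream extends FilterOutputStream {
    private final long maxBlockSize; // assumed configured cap
    private long written;            // bytes written to this block so far

    public BoundedBlockOutputStream(OutputStream out, long maxBlockSize) {
        super(out);
        this.maxBlockSize = maxBlockSize;
    }

    @Override
    public void write(int b) throws IOException {
        checkCapacity(1);
        out.write(b);
        written += 1;
    }

    @Override
    public void write(byte[] b, int off, int len) throws IOException {
        checkCapacity(len);
        out.write(b, off, len);
        written += len;
    }

    // Reject the write before any bytes land, so the block never oversizes.
    private void checkCapacity(long extra) throws IOException {
        if (written + extra > maxBlockSize) {
            throw new IOException("Write would exceed max block size "
                + maxBlockSize);
        }
    }
}
```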