[
https://issues.apache.org/jira/browse/HDFS-583?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14074690#comment-14074690
]
Colin Patrick McCabe commented on HDFS-583:
-------------------------------------------
Most places where we refer to block size use "long". I'm not sure where we
are limiting this (it would be good to document it somewhere, if such a limit
is indeed in place).
In general, enormous blocks haven't been all that useful in practice, since
they make it harder for execution frameworks to divide up work in a
reasonable manner. I can see why you might want a limit in theory, but so far
nobody has actually requested this feature. With or without giant blocks,
evil clients can still fill up the DataNode, up to their designated quota.
Small blocks are probably the more harmful case, but we limited those in
HDFS-4305 when we introduced {{dfs.namenode.fs-limits.min-block-size}}.
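If we ever did want a limit, the simplest place would probably be a check next
to the HDFS-4305 minimum in the NameNode's create/append path rather than
anything on the DataNode. A rough sketch of what that could look like is below;
the {{dfs.namenode.fs-limits.max-block-size}} key and the helper class are
hypothetical, just for illustration:
{code:java}
import java.io.IOException;

// Sketch only: a NameNode-side sanity check mirroring the HDFS-4305
// min-block-size check. The max limit and its config key name are
// hypothetical; only the min check exists today.
class BlockSizeLimits {
  static void verifyBlockSize(long blockSize, long minBlockSize,
      long maxBlockSize) throws IOException {
    if (blockSize < minBlockSize) {
      throw new IOException("Specified block size " + blockSize
          + " is less than the configured minimum "
          + "(dfs.namenode.fs-limits.min-block-size = " + minBlockSize + ")");
    }
    // Hypothetical upper bound; treat 0 as "no limit".
    if (maxBlockSize > 0 && blockSize > maxBlockSize) {
      throw new IOException("Specified block size " + blockSize
          + " exceeds the configured maximum "
          + "(dfs.namenode.fs-limits.max-block-size = " + maxBlockSize + ")");
    }
  }
}
{code}
Rejecting the block size at file-creation time keeps the enforcement in one
place and avoids any per-packet overhead on the write path.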
> HDFS should enforce a max block size
> ------------------------------------
>
> Key: HDFS-583
> URL: https://issues.apache.org/jira/browse/HDFS-583
> Project: Hadoop HDFS
> Issue Type: Improvement
> Components: namenode
> Reporter: Hairong Kuang
>
> When the DataNode creates a replica, it should enforce a max block size, so
> clients can't go crazy. One way of enforcing this is to make
> BlockWritesStreams filter streams that check the block size.
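For reference, the "filter streams" idea from the description would look
roughly like the sketch below on the DataNode write path. The class name and
the way it would be wired into the replica-creation code are illustrative,
not the actual BlockReceiver/stream setup:
{code:java}
import java.io.FilterOutputStream;
import java.io.IOException;
import java.io.OutputStream;

// Sketch only: a filter stream that refuses to let a replica grow past a
// configured maximum block size.
class BoundedBlockOutputStream extends FilterOutputStream {
  private final long maxBlockSize;
  private long written;

  BoundedBlockOutputStream(OutputStream out, long maxBlockSize) {
    super(out);
    this.maxBlockSize = maxBlockSize;
  }

  private void checkLimit(long extra) throws IOException {
    if (written + extra > maxBlockSize) {
      throw new IOException("Write of " + extra + " bytes would grow the block"
          + " to " + (written + extra) + " bytes, past the maximum of "
          + maxBlockSize);
    }
  }

  @Override
  public void write(int b) throws IOException {
    checkLimit(1);
    out.write(b);
    written++;
  }

  @Override
  public void write(byte[] b, int off, int len) throws IOException {
    checkLimit(len);
    out.write(b, off, len);
    written += len;
  }
}
{code}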
--
This message was sent by Atlassian JIRA
(v6.2#6252)