[ https://issues.apache.org/jira/browse/HBASE-15709?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15597624#comment-15597624 ]
Duo Zhang commented on HBASE-15709:
-----------------------------------
Oh, I've checked the HDFS code: there is only a minBlockSize limitation in
FSNamesystem, no maxBlockSize, and it seems the BlockReceiver does not check
whether the received data exceeds the block size limit either (actually it
does not know the expected block size...).
So it is possible to write a very large block to HDFS; at least, it works on
the current trunk version of HDFS. I will write a UT later to confirm it,
roughly along the lines of the sketch below. And please make sure HDFS does
not drop the 'write a block larger than the specified block size' ability in
the future, [~stack] (I think you are a PMC member of Hadoop?). At least, not
before we implement the AsyncFSOutput in HDFS...
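A minimal sketch of what such a UT could look like (an illustration, not the
final test; it assumes a test classpath with MiniDFSCluster, and the 1GB
block size and 256MB of data are arbitrary values):
{code:java}
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FSDataOutputStream;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.hdfs.DistributedFileSystem;
import org.apache.hadoop.hdfs.MiniDFSCluster;

public class TestWriteLargeBlock {
  public static void main(String[] args) throws Exception {
    Configuration conf = new Configuration();
    MiniDFSCluster cluster =
        new MiniDFSCluster.Builder(conf).numDataNodes(1).build();
    try {
      DistributedFileSystem fs = cluster.getFileSystem();
      Path p = new Path("/large-block");
      // The NameNode only rejects blockSize below
      // dfs.namenode.fs-limits.min-block-size; there is no maximum check,
      // so an oversized block size is accepted.
      long blockSize = 1024L * 1024 * 1024; // 1GB, far above the 128MB default
      FSDataOutputStream out = fs.create(p, true, 4096, (short) 1, blockSize);
      byte[] chunk = new byte[1024 * 1024];
      for (int i = 0; i < 256; i++) { // 256MB, past the default block size
        out.write(chunk);
      }
      out.close();
      // With the default block size this file would span two blocks; here it
      // should still be a single (large) block.
      int blocks = fs.getFileBlockLocations(p, 0, 256L * 1024 * 1024).length;
      System.out.println("blocks: " + blocks); // expect 1
    } finally {
      cluster.shutdown();
    }
  }
}
{code}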
Thanks.
> Handle large edits for asynchronous WAL
> ---------------------------------------
>
> Key: HBASE-15709
> URL: https://issues.apache.org/jira/browse/HBASE-15709
> Project: HBase
> Issue Type: Sub-task
> Components: io, wal
> Reporter: Duo Zhang
> Priority: Critical
>
> First, FanOutOneBlockAsyncDFSOutput cannot work if the buffered data is
> larger than PacketReceiver.MAX_PACKET_SIZE (16MB).
> Second, since we only allow one block here, we need to make sure we do not
> exceed the block size after writing a large chunk of data.
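(Not part of the issue text: the first constraint means a large edit has to
be split across multiple data packets before it is flushed. A hedged sketch
of that chunking step follows; PacketChunker and the HEADROOM value are
illustrative assumptions, not HBase or HDFS identifiers.)
{code:java}
import java.nio.ByteBuffer;
import java.util.ArrayList;
import java.util.List;

public class PacketChunker {

  // Mirrors PacketReceiver.MAX_PACKET_SIZE (16MB). The packet header and
  // checksums also count toward the limit, so leave some headroom.
  static final int MAX_PACKET_SIZE = 16 * 1024 * 1024;
  static final int HEADROOM = 64 * 1024; // assumed allowance, illustrative

  // Slice one large buffer into per-packet views, each safely under the cap.
  static List<ByteBuffer> chunk(ByteBuffer buffered) {
    List<ByteBuffer> packets = new ArrayList<>();
    final int maxData = MAX_PACKET_SIZE - HEADROOM;
    while (buffered.hasRemaining()) {
      int len = Math.min(maxData, buffered.remaining());
      ByteBuffer slice = buffered.duplicate();
      slice.limit(buffered.position() + len);
      packets.add(slice.slice());
      buffered.position(buffered.position() + len);
    }
    return packets;
  }
}
{code}
The second constraint would presumably also require tracking the total bytes
written against the block size and rolling to a new file before exceeding it.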