[ https://issues.apache.org/jira/browse/HADOOP-16452?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16895487#comment-16895487 ]

Wei-Chiu Chuang commented on HADOOP-16452:
------------------------------------------

LGTM. Let's get this in, and also break full block reports (FBRs) into multiple 
messages (HDFS-11313/HDFS-14657).

> Increase ipc.maximum.data.length default from 64MB to 128MB
> -----------------------------------------------------------
>
>                 Key: HADOOP-16452
>                 URL: https://issues.apache.org/jira/browse/HADOOP-16452
>             Project: Hadoop Common
>          Issue Type: Improvement
>          Components: ipc
>    Affects Versions: 2.6.0
>            Reporter: Wei-Chiu Chuang
>            Assignee: Siyao Meng
>            Priority: Major
>         Attachments: HADOOP-16452.001.patch, HADOOP-16452.002.patch
>
>
> Reason for bumping the default:
> DataNodes are getting denser; these days it is not unusual to find a DataNode 
> with more than 7 million blocks.
> With that many blocks, the block report message can exceed the 64 MB limit 
> (defined by ipc.maximum.data.length). The block reports are then rejected, 
> causing missing blocks in HDFS. We had to double this configuration value to 
> work around the issue.
> We are seeing an increasing number of these cases. I think it's time to 
> revisit some of these default values as the hardware evolves.
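> For reference, a minimal core-site.xml sketch of that workaround (the property 
> name and its 64 MB default come from Hadoop; the doubled value is the one 
> described above and should be checked against the Hadoop version in use):
>   <property>
>     <name>ipc.maximum.data.length</name>
>     <!-- default is 67108864 bytes (64 MB); doubled here to 134217728 (128 MB) -->
>     <value>134217728</value>
>   </property>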



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)
