[ https://issues.apache.org/jira/browse/HADOOP-16452?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17311087#comment-17311087 ]

Wei-Chiu Chuang commented on HADOOP-16452:
------------------------------------------

For future reference: the Hadoop IPC subcomponent is used by other projects 
(Ratis, Tez) that have different message size characteristics. The bump to the 
default size here was aimed at HDFS, but it may make sense for those projects 
to adopt a larger default message size too.

> Increase ipc.maximum.data.length default from 64MB to 128MB
> -----------------------------------------------------------
>
>                 Key: HADOOP-16452
>                 URL: https://issues.apache.org/jira/browse/HADOOP-16452
>             Project: Hadoop Common
>          Issue Type: Improvement
>          Components: ipc
>    Affects Versions: 2.6.0
>            Reporter: Wei-Chiu Chuang
>            Assignee: Siyao Meng
>            Priority: Major
>             Fix For: 3.3.0
>
>         Attachments: HADOOP-16452.001.patch, HADOOP-16452.002.patch
>
>
> Reason for bumping the default:
> Denser DataNodes are now common; a DataNode with more than 7 million blocks 
> is not unusual these days.
> With that many blocks, the full block report message can exceed the 64 MB 
> limit defined by ipc.maximum.data.length. The NameNode rejects the oversized 
> block reports, causing missing blocks in HDFS. We had to double this 
> configuration value to work around the issue.
> We are seeing an increasing number of these cases. It is time to revisit 
> some of these defaults as hardware evolves.


