[ https://issues.apache.org/jira/browse/HADOOP-16452?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16891147#comment-16891147 ]

Anu Engineer commented on HADOOP-16452:
---------------------------------------

What is the impact on block processing? We take some locks, so these kinds of
block reports can be hard on the NN. Is it possible to break these messages
into multiple messages, at least inside the NN?
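
For context, block reports can already be split on the DataNode side: once
the total block count crosses a threshold, the DataNode sends one block
report RPC per storage rather than a single combined message. A minimal
hdfs-site.xml sketch, assuming the dfs.blockreport.split.threshold property
and its conventional default are present in the deployed release:

  <property>
    <!-- If the DataNode's total block count exceeds this threshold,
         send one block report RPC per storage instead of a single
         combined report. Property name and default are assumptions
         about the deployed release. -->
    <name>dfs.blockreport.split.threshold</name>
    <value>1000000</value>
  </property>

That shrinks each RPC payload, though presumably each per-storage report is
still processed under the NameNode lock, so it only partly addresses the
lock-contention concern above.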

> Increase ipc.maximum.data.length default from 64MB to 128MB
> -----------------------------------------------------------
>
>                 Key: HADOOP-16452
>                 URL: https://issues.apache.org/jira/browse/HADOOP-16452
>             Project: Hadoop Common
>          Issue Type: Improvement
>          Components: ipc
>    Affects Versions: 2.6.0
>            Reporter: Wei-Chiu Chuang
>            Priority: Major
>
> Reason for bumping the default:
> DataNodes are getting denser; it is not uncommon to find a DataNode with
> more than 7 million blocks these days.
> With that many blocks, a full block report can exceed the 64 MB limit
> (defined by ipc.maximum.data.length). Oversized block reports are
> rejected, causing missing blocks in HDFS. We had to double this
> configuration value to work around the issue.
> We are seeing an increasing number of these cases. I think it's time to 
> revisit some of these default values as the hardware evolves.
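
A minimal core-site.xml sketch of the workaround described above, assuming
the property is applied on the NameNode (the RPC server that receives the
block reports); 134217728 bytes is the literal equivalent of 128 MB:

  <property>
    <!-- Maximum accepted IPC message size; doubled from the 64 MB
         default (67108864 bytes) to 128 MB so that large block
         reports are no longer rejected. -->
    <name>ipc.maximum.data.length</name>
    <value>134217728</value>
  </property>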


