[ https://issues.apache.org/jira/browse/HADOOP-16452?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16894156#comment-16894156 ]

Wei-Chiu Chuang commented on HADOOP-16452:
------------------------------------------

[~smeng] please also update core-default.xml:
{code}
<property>
  <name>ipc.maximum.data.length</name>
  <value>67108864</value>
  <description>This indicates the maximum IPC message length (bytes) that can be
    accepted by the server. Messages larger than this value are rejected by the
    server immediately to avoid possible OOMs. This setting should rarely need
    to be changed.
  </description>
</property>
{code}

and also the release note of this JIRA.
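
For reference, the updated core-default.xml entry would presumably look like this (a sketch; 134217728 bytes = 128 MB, per the issue title):
{code}
<property>
  <name>ipc.maximum.data.length</name>
  <value>134217728</value>
  <description>This indicates the maximum IPC message length (bytes) that can be
    accepted by the server. Messages larger than this value are rejected by the
    server immediately to avoid possible OOMs. This setting should rarely need
    to be changed.
  </description>
</property>
{code}
A release note along the lines of "The default value of ipc.maximum.data.length was increased from 64 MB to 128 MB" would cover the behavior change.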

> Increase ipc.maximum.data.length default from 64MB to 128MB
> -----------------------------------------------------------
>
>                 Key: HADOOP-16452
>                 URL: https://issues.apache.org/jira/browse/HADOOP-16452
>             Project: Hadoop Common
>          Issue Type: Improvement
>          Components: ipc
>    Affects Versions: 2.6.0
>            Reporter: Wei-Chiu Chuang
>            Priority: Major
>         Attachments: HADOOP-16452.001.patch
>
>
> Reason for bumping the default:
> DataNodes are getting denser; it is not uncommon to find a DataNode with more
> than 7 million blocks these days.
> With such a high number of blocks, the block report message can exceed the
> 64 MB limit (defined by ipc.maximum.data.length). The block reports are then
> rejected, causing missing blocks in HDFS. We had to double this configuration
> value in order to work around the issue (see the sketch below).
> We are seeing an increasing number of these cases. I think it's time to
> revisit some of these defaults as hardware evolves.
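> As a workaround until the new default lands, the limit can be raised via
> core-site.xml, e.g. (a sketch of the doubled value, 134217728 bytes = 128 MB):
> {code}
> <property>
>   <name>ipc.maximum.data.length</name>
>   <value>134217728</value>
> </property>
> {code}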


