[https://issues.apache.org/jira/browse/HADOOP-16452?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16896446#comment-16896446]
Hudson commented on HADOOP-16452:
---------------------------------
FAILURE: Integrated in Jenkins build Hadoop-trunk-Commit #17009 (See [https://builds.apache.org/job/Hadoop-trunk-Commit/17009/])
HADOOP-16452. Increase ipc.maximum.data.length default from 64MB to 128MB (weichiu: rev c75f16db79974ad03afbc366709fe2356d0a633e)
* (edit) hadoop-common-project/hadoop-common/src/main/resources/core-default.xml
* (edit) hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/CommonConfigurationKeys.java
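For reference, the CommonConfigurationKeys.java side of the commit amounts to bumping a default constant. A minimal sketch, assuming Hadoop's usual key/default naming convention (the exact diff is in rev c75f16d):

    // org.apache.hadoop.fs.CommonConfigurationKeys (sketch of the change,
    // not the verbatim diff)
    public static final String IPC_MAXIMUM_DATA_LENGTH =
        "ipc.maximum.data.length";
    // HADOOP-16452: default raised from 64 MB to 128 MB.
    public static final int IPC_MAXIMUM_DATA_LENGTH_DEFAULT = 128 * 1024 * 1024;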
> Increase ipc.maximum.data.length default from 64MB to 128MB
> -----------------------------------------------------------
>
> Key: HADOOP-16452
> URL: https://issues.apache.org/jira/browse/HADOOP-16452
> Project: Hadoop Common
> Issue Type: Improvement
> Components: ipc
> Affects Versions: 2.6.0
> Reporter: Wei-Chiu Chuang
> Assignee: Siyao Meng
> Priority: Major
> Fix For: 3.3.0
>
> Attachments: HADOOP-16452.001.patch, HADOOP-16452.002.patch
>
>
> Reason for bumping the default:
> DataNodes are getting denser; it is not uncommon to find a DataNode with
> more than 7 million blocks these days.
> With that many blocks, a full block report can exceed the 64 MB limit
> (defined by ipc.maximum.data.length). Oversized block reports are
> rejected, causing missing blocks in HDFS. We had to double this
> configuration value to work around the issue.
> We are seeing an increasing number of these cases. I think it's time to
> revisit some of these defaults as hardware evolves.
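For clusters still running with the old default, the workaround described above is a core-site.xml override. A minimal sketch (the value is in bytes, so 128 MB is 134217728):

    <!-- core-site.xml: raise the maximum IPC payload size so that large
         block reports are not rejected. The value is in bytes. -->
    <property>
      <name>ipc.maximum.data.length</name>
      <value>134217728</value>
    </property>

This is a server-side limit read when the IPC server starts, so the NameNode needs a restart for the new value to take effect.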