[
https://issues.apache.org/jira/browse/HDFS-5153?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13874362#comment-13874362
]
Hadoop QA commented on HDFS-5153:
---------------------------------
{color:green}+1 overall{color}. Here are the results of testing the latest
attachment
http://issues.apache.org/jira/secure/attachment/12623541/HDFS-5153.04.patch
against trunk revision .
{color:green}+1 @author{color}. The patch does not contain any @author
tags.
{color:green}+1 tests included{color}. The patch appears to include 4 new
or modified test files.
{color:green}+1 javac{color}. The applied patch does not increase the
total number of javac compiler warnings.
{color:green}+1 javadoc{color}. The javadoc tool did not generate any
warning messages.
{color:green}+1 eclipse:eclipse{color}. The patch built with
eclipse:eclipse.
{color:green}+1 findbugs{color}. The patch does not introduce any new
Findbugs (version 1.3.9) warnings.
{color:green}+1 release audit{color}. The applied patch does not increase
the total number of release audit warnings.
{color:green}+1 core tests{color}. The patch passed unit tests in
hadoop-hdfs-project/hadoop-hdfs.
{color:green}+1 contrib tests{color}. The patch passed contrib unit tests.
Test results:
https://builds.apache.org/job/PreCommit-HDFS-Build/5906//testReport/
Console output: https://builds.apache.org/job/PreCommit-HDFS-Build/5906//console
This message is automatically generated.
> Datanode should send block reports for each storage in a separate message
> -------------------------------------------------------------------------
>
> Key: HDFS-5153
> URL: https://issues.apache.org/jira/browse/HDFS-5153
> Project: Hadoop HDFS
> Issue Type: Bug
> Components: datanode
> Affects Versions: 3.0.0
> Reporter: Arpit Agarwal
> Attachments: HDFS-5153.01.patch, HDFS-5153.03.patch,
> HDFS-5153.03b.patch, HDFS-5153.04.patch
>
>
> When the number of blocks on the DataNode grows large we start running into a
> few issues:
> # Block reports take a long time to process on the NameNode. In testing we
> have seen that a block report with 6 million blocks takes close to one
> second to process on the NameNode, with the NameSystem write lock held for
> the entire time.
> # We start hitting the default protobuf message limit of 64MB somewhere
> around 10 million blocks. While we can increase the message size limit (see
> the first sketch below the quoted description), it already takes over 7
> seconds to serialize/deserialize a block report of this size.
> HDFS-2832 introduced the concept of a DataNode as a collection of storages,
> i.e. the NameNode is aware of all the volumes (storage directories) attached
> to a given DataNode. This makes it easy to split block reports on the DN
> side, sending one report per storage directory to mitigate both of the
> problems above; the second sketch below illustrates the idea.
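The note on raising the protobuf limit in item 2 of the quoted description can be made concrete. A minimal sketch, assuming protobuf-java: CodedInputStream.setSizeLimit is the library's real API, but the 256MB value and the LargeMessageReader wrapper are arbitrary illustrations, not part of the patch.
{code:java}
import com.google.protobuf.CodedInputStream;
import java.io.InputStream;

public class LargeMessageReader {
  /**
   * Build a reader that tolerates messages larger than protobuf's 64MB
   * default. Raising the limit works, but as the description notes it does
   * not help the multi-second serialization cost of a huge block report.
   */
  public static CodedInputStream newReader(InputStream in) {
    CodedInputStream cis = CodedInputStream.newInstance(in);
    cis.setSizeLimit(256 * 1024 * 1024); // arbitrary 256MB ceiling
    return cis;
  }
}
{code}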
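And a minimal sketch of the per-storage split itself. Everything here is a simplified, hypothetical stand-in for the real DatanodeProtocol types (DatanodeProtocolStub, the storage-id keyed map, and the long[] block encoding are placeholders); it shows only the shape of the change, not the actual patch.
{code:java}
import java.util.Map;

public class PerStorageBlockReporter {

  /** Hypothetical stand-in for the DatanodeProtocol blockReport RPC. */
  interface DatanodeProtocolStub {
    void blockReport(String storageId, long[] encodedBlocks);
  }

  private final DatanodeProtocolStub nameNode;

  PerStorageBlockReporter(DatanodeProtocolStub nameNode) {
    this.nameNode = nameNode;
  }

  /**
   * One RPC per storage directory instead of one report for the whole node:
   * each message stays well under the protobuf size limit, and the NameNode
   * holds its write lock only for the duration of a single storage's report.
   */
  void reportAllStorages(Map<String, long[]> blocksByStorage) {
    for (Map.Entry<String, long[]> e : blocksByStorage.entrySet()) {
      nameNode.blockReport(e.getKey(), e.getValue());
    }
  }
}
{code}
Beyond staying under the message size limit, this split also bounds how long the NameNode holds its write lock for any single RPC, since each storage's report is processed independently.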
--
This message was sent by Atlassian JIRA
(v6.1.5#6160)