[ https://issues.apache.org/jira/browse/HDFS-5153?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13872539#comment-13872539 ]
Hadoop QA commented on HDFS-5153:
---------------------------------
{color:red}-1 overall{color}. Here are the results of testing the latest
attachment
http://issues.apache.org/jira/secure/attachment/12623206/HDFS-5153.01.patch
against trunk revision .
{color:red}-1 patch{color}. The patch command could not apply the patch.
Console output: https://builds.apache.org/job/PreCommit-HDFS-Build/5887//console
This message is automatically generated.
> Datanode should stagger block reports from individual storages
> --------------------------------------------------------------
>
> Key: HDFS-5153
> URL: https://issues.apache.org/jira/browse/HDFS-5153
> Project: Hadoop HDFS
> Issue Type: Bug
> Components: datanode
> Affects Versions: 3.0.0
> Reporter: Arpit Agarwal
> Attachments: HDFS-5153.01.patch
>
>
> When the number of blocks on the DataNode grows large we start running into a
> few issues:
> # Block reports take a long time to process on the NameNode. In testing we
> have seen that a block report with 6 million blocks takes close to one second
> to process on the NameNode. The NameSystem write lock is held during this
> time.
> # We start hitting the default protobuf message limit of 64MB somewhere
> around 10 million blocks. While we can increase the message size limit, it
> already takes over 7 seconds to serialize/deserialize a block report of this
> size.
> HDFS-2832 has introduced the concept of a DataNode as a collection of
> storages, i.e. the NameNode is aware of all the volumes (storage directories)
> attached to a given DataNode. This makes it easy to split block reports from
> the DN by sending one report per storage directory to mitigate the above
> problems.
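The per-storage split described above can be sketched as follows. This is an illustrative sketch only, not actual Hadoop code: the class `StaggeredBlockReports`, the method `splitByStorage`, and the storage IDs are all hypothetical names, assuming the DataNode already tracks which block IDs live on which storage directory.

```java
import java.util.*;

// Hypothetical sketch: instead of one monolithic block report covering every
// volume, build one smaller report per storage directory. Each report can
// then be sent as its own RPC, spread out over the heartbeat interval.
public class StaggeredBlockReports {

    // One entry per storage directory: storageId -> block IDs on that volume.
    // Returns one report (list of block IDs) per storage.
    static List<List<Long>> splitByStorage(Map<String, List<Long>> blocksPerStorage) {
        List<List<Long>> reports = new ArrayList<>();
        for (Map.Entry<String, List<Long>> e : blocksPerStorage.entrySet()) {
            reports.add(e.getValue()); // each storage becomes its own report
        }
        return reports;
    }

    public static void main(String[] args) {
        Map<String, List<Long>> blocks = new LinkedHashMap<>();
        blocks.put("DS-1", Arrays.asList(1L, 2L, 3L));
        blocks.put("DS-2", Arrays.asList(4L, 5L));

        List<List<Long>> reports = splitByStorage(blocks);
        // Two storages -> two smaller reports instead of one 5-block report,
        // keeping each RPC well under the protobuf message size limit.
        System.out.println(reports.size());
    }
}
```

Because each report covers only one volume, the NameNode holds its write lock for a shorter stretch per report, and no single message approaches the 64MB protobuf limit.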
--
This message was sent by Atlassian JIRA
(v6.1.5#6160)