[
https://issues.apache.org/jira/browse/HDFS-611?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12774062#action_12774062
]
Konstantin Shvachko commented on HDFS-611:
------------------------------------------
It is better to keep the whole implementation in one place rather than spread it
between classes. That makes modifications easier.
When you decide to move AsyncDiskService to common, you can move the current
implementation and create a new class, DataNodeAsyncDiskService, which
either extends AsyncDiskService or encapsulates it, whichever is better.
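The encapsulation variant could look roughly like the sketch below. This is an illustration only: `AsyncDiskService` here is a minimal stand-in with an assumed API (per-volume thread pools), not the actual Hadoop class, and `DataNodeAsyncDiskService` is the hypothetical wrapper the comment proposes.

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;

// Generic service: one worker per disk volume (stand-in with an assumed
// API, not the real Hadoop common class).
class AsyncDiskService {
  private final Map<String, ExecutorService> executors = new ConcurrentHashMap<>();

  void addVolume(String volume) {
    executors.putIfAbsent(volume, Executors.newSingleThreadExecutor());
  }

  void execute(String volume, Runnable task) {
    ExecutorService e = executors.get(volume);
    if (e == null) throw new IllegalArgumentException("unknown volume " + volume);
    e.execute(task);
  }

  void shutdownAndWait() throws InterruptedException {
    for (ExecutorService e : executors.values()) e.shutdown();
    for (ExecutorService e : executors.values()) e.awaitTermination(10, TimeUnit.SECONDS);
  }
}

// Datanode-specific wrapper: encapsulates rather than extends, so the
// block-deletion policy stays in one HDFS class (names hypothetical).
class DataNodeAsyncDiskService {
  private final AsyncDiskService service = new AsyncDiskService();

  void addVolume(String volume) { service.addVolume(volume); }

  // Schedule a block file for deletion on that volume's worker thread.
  void deleteAsync(String volume, java.io.File blockFile) {
    service.execute(volume, () -> blockFile.delete());
  }

  void shutdown() throws InterruptedException { service.shutdownAndWait(); }
}
```

Encapsulation keeps the generic thread-pool machinery reusable from common while the datanode class owns only the HDFS-specific deletion logic.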
> Heartbeats times from Datanodes increase when there are plenty of blocks to
> delete
> ----------------------------------------------------------------------------------
>
> Key: HDFS-611
> URL: https://issues.apache.org/jira/browse/HDFS-611
> Project: Hadoop HDFS
> Issue Type: Bug
> Components: data-node
> Affects Versions: 0.20.1, 0.21.0, 0.22.0
> Reporter: dhruba borthakur
> Assignee: Zheng Shao
> Fix For: 0.20.2, 0.21.0, 0.22.0
>
> Attachments: HDFS-611.branch-19.patch, HDFS-611.branch-19.v2.patch,
> HDFS-611.branch-20.patch, HDFS-611.branch-20.v2.patch, HDFS-611.trunk.patch,
> HDFS-611.trunk.v2.patch, HDFS-611.trunk.v3.patch, HDFS-611.trunk.v4.patch,
> HDFS-611.trunk.v5.patch
>
>
> I am seeing that when we delete a large directory that has plenty of blocks,
> the heartbeat times from datanodes increase significantly from the normal
> value of 3 seconds to as large as 50 seconds or so. The heartbeat thread in
> the Datanode deletes a bunch of blocks sequentially, which causes the
> heartbeat times to increase.
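The direction of the fix can be sketched as follows: the heartbeat path hands block files to a background executor instead of unlinking them inline, so a heartbeat no longer stalls behind a large batch of deletions. This is an illustrative sketch, not the patch itself; the class and method names are invented for the example.

```java
import java.io.File;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;

// Hypothetical helper: deletions are queued to a worker thread so the
// caller (e.g. the heartbeat thread) returns immediately.
class BackgroundDeleter {
  private final ExecutorService pool = Executors.newSingleThreadExecutor();

  // Called from the heartbeat thread: enqueue and return at once.
  void scheduleDelete(File blockFile) {
    pool.execute(() -> blockFile.delete());
  }

  // Drain pending deletions (for shutdown or tests).
  void shutdownAndWait() throws InterruptedException {
    pool.shutdown();
    pool.awaitTermination(10, TimeUnit.SECONDS);
  }
}
```

With this shape, deleting a large directory enqueues many small tasks, and heartbeat latency stays bounded by queueing cost rather than by disk unlink time.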
--
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.