[
https://issues.apache.org/jira/browse/HDFS-611?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12753861#action_12753861
]
Doug Cutting commented on HDFS-611:
-----------------------------------
If it's I/O bound, it could make sense to have one thread per volume. The
datanode knows the number of volumes and could use that instead of a config
option. But do you really think a single thread would get so far behind that
the queue of blocks to delete would exhaust the datanode's heap? Most
datanodes only have a few thousand blocks, don't they? And not using all of
the I/O bandwidth for block deletion might be a feature.
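For illustration, here is a minimal sketch of the one-thread-per-volume idea
(all class and method names below are hypothetical, not the actual DataNode
code): each volume gets a single-threaded executor, so the heartbeat path
only enqueues work and returns immediately, and each disk sees at most one
deletion at a time.

import java.io.File;
import java.util.List;
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

// Hypothetical sketch: one deletion thread per data volume.
public class PerVolumeBlockDeleter {
  private final Map<File, ExecutorService> deleters =
      new ConcurrentHashMap<File, ExecutorService>();

  public PerVolumeBlockDeleter(List<File> volumes) {
    // The datanode already knows its volumes, so no config option is needed.
    for (File volume : volumes) {
      deleters.put(volume, Executors.newSingleThreadExecutor());
    }
  }

  // Queue a block file for deletion on the thread that owns its volume;
  // the caller (e.g. the heartbeat path) does not block on disk I/O.
  public void deleteAsync(File volume, final File blockFile) {
    deleters.get(volume).execute(new Runnable() {
      public void run() {
        if (!blockFile.delete()) {
          // The real datanode would log and report this failure.
          System.err.println("Could not delete " + blockFile);
        }
      }
    });
  }
}

If heap growth from a long backlog were a real concern, a ThreadPoolExecutor
over a bounded ArrayBlockingQueue would cap the queued work per volume,
though as noted above the backlog on most datanodes is probably small.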
> Heartbeat times from Datanodes increase when there are plenty of blocks to
> delete
> ----------------------------------------------------------------------------------
>
> Key: HDFS-611
> URL: https://issues.apache.org/jira/browse/HDFS-611
> Project: Hadoop HDFS
> Issue Type: Bug
> Components: data-node
> Reporter: dhruba borthakur
> Assignee: dhruba borthakur
>
> I am seeing that when we delete a large directory that has plenty of blocks,
> the heartbeat times from datanodes increase significantly from the normal
> value of 3 seconds to as large as 50 seconds or so. The heartbeat thread in
> the Datanode deletes a bunch of blocks sequentially, which causes the
> heartbeat times to increase.