[
https://issues.apache.org/jira/browse/HADOOP-4679?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12652529#action_12652529
]
Hairong Kuang commented on HADOOP-4679:
---------------------------------------
I do not think it is necessary to check for a read-only disk for both the block file &
the meta data file. Checking the block file is good enough. I will update the javadoc
for shutdown.
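
For reference, a minimal sketch of the kind of single check I mean (illustrative only; the class and method names below are made up and are not part of the patch). The assumption is that the meta file sits on the same volume as the block file, so one writability check covers both:

    import java.io.File;
    import java.io.IOException;

    public class ReadOnlyVolumeCheck {
      /**
       * Throws if the directory holding the block file cannot be written.
       * The meta data file is assumed to live next to the block file,
       * so a second check on it would be redundant.
       */
      static void checkBlockFileVolume(File blockFile) throws IOException {
        File dir = blockFile.getParentFile();
        if (dir == null || !dir.canWrite()) {
          throw new IOException("Volume holding " + blockFile
              + " appears to be read-only");
        }
      }
    }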
> Datanode prints tons of log messages: Waiting for threadgroup to exit, active
> threads is XX
> ------------------------------------------------------------------------------------------
>
> Key: HADOOP-4679
> URL: https://issues.apache.org/jira/browse/HADOOP-4679
> Project: Hadoop Core
> Issue Type: Bug
> Components: dfs
> Reporter: Hairong Kuang
> Assignee: Hairong Kuang
> Attachments: diskError.patch, diskError1.patch, diskError2.patch
>
>
> When a data receiver thread sees a disk error, it immediately calls shutdown
> to shut down the DataNode. But the shutdown method does not return until all data
> receiver threads exit, which will never happen since the calling thread is itself
> one of them. Therefore the DataNode gets stuck in a deadlock/livelock state,
> emitting tons of log messages: Waiting for threadgroup to exit, active threads is XX.
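
To make the hang described above concrete, here is a stripped-down sketch (made-up class, not the actual DataNode source) of a receiver thread calling a shutdown method that waits for its own thread group to drain:

    public class ShutdownHangSketch {
      private final ThreadGroup receivers = new ThreadGroup("dataXceiverServer");

      void shutdown() {
        // The caller is itself a member of "receivers", so activeCount()
        // never drops to zero and this loop logs forever.
        while (receivers.activeCount() > 0) {
          System.out.println("Waiting for threadgroup to exit, active threads is "
              + receivers.activeCount());
          try { Thread.sleep(1000); } catch (InterruptedException ignored) { }
        }
      }

      void startReceiver() {
        new Thread(receivers, () -> {
          // On a disk error the receiver calls shutdown() directly, which then
          // waits for this very thread to exit -> the DataNode never shuts down.
          shutdown();
        }).start();
      }

      public static void main(String[] args) {
        new ShutdownHangSketch().startReceiver();
      }
    }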
--
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.