[
https://issues.apache.org/jira/browse/HDFS-1479?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12925530#action_12925530
]
Thanh Do commented on HDFS-1479:
--------------------------------
Can you give a detailed scenario?
> Massive file deletion causes some timeouts in writers
> -----------------------------------------------------
>
> Key: HDFS-1479
> URL: https://issues.apache.org/jira/browse/HDFS-1479
> Project: Hadoop HDFS
> Issue Type: Improvement
> Affects Versions: 0.20.2
> Reporter: Zheng Shao
> Assignee: Zheng Shao
> Priority: Minor
>
> When we do a massive deletion of files, we see timeouts in writers that are
> writing to HDFS. This does not happen on all DataNodes, but it happens
> regularly enough that we would like to fix it.
> {code}
> yyy.xxx.com: 10/10/25 00:55:32 WARN hdfs.DFSClient: DFSOutputStream ResponseProcessor exception for block blk_-5459995953259765112_37619608
> java.net.SocketTimeoutException: 69000 millis timeout while waiting for channel to be ready for read. ch : java.nio.channels.SocketChannel[connected local=/10.10.10.10:56319 remote=/10.10.10.10:50010]
> {code}
> This is caused by the default setting of AsyncDiskService, which starts 4
> threads per volume to delete files. The concurrent deletion I/O on a volume
> competes with in-flight block writes on the same disk, which can delay the
> write pipeline long enough for the client's ResponseProcessor to time out.
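For context, here is a minimal sketch of the pattern described above: one small thread pool per volume that executes file deletions asynchronously. The class and member names are illustrative assumptions for this sketch, not the actual Hadoop 0.20 source; the point is that with 4 deletion threads per volume, up to 4 concurrent unlinks can hit a single disk at once.
{code}
import java.io.File;
import java.util.HashMap;
import java.util.Map;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

// Illustrative sketch (not the actual Hadoop source) of a per-volume
// asynchronous deletion service like the one this issue describes.
public class AsyncDeletionSketch {
  // Assumption for illustration: 4 deletion threads per volume, matching
  // the default blamed in this issue.
  private static final int THREADS_PER_VOLUME = 4;

  // One dedicated pool per volume, so deletions on one disk never block
  // deletions on another.
  private final Map<File, ExecutorService> executors =
      new HashMap<File, ExecutorService>();

  public AsyncDeletionSketch(File[] volumes) {
    for (File vol : volumes) {
      executors.put(vol, Executors.newFixedThreadPool(THREADS_PER_VOLUME));
    }
  }

  /** Queue an asynchronous deletion on the pool owning the block's volume. */
  public void deleteAsync(File volume, final File blockFile) {
    executors.get(volume).execute(new Runnable() {
      public void run() {
        // Up to THREADS_PER_VOLUME of these unlinks run concurrently per
        // disk, competing with writer I/O on the same spindle.
        if (!blockFile.delete()) {
          System.err.println("Failed to delete " + blockFile);
        }
      }
    });
  }
}
{code}
An obvious direction for the improvement, then, is to lower the per-volume thread count or make it configurable, so that a mass delete cannot monopolize a disk that writers are using.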
--
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.