[
https://issues.apache.org/jira/browse/HADOOP-2346?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12564557#action_12564557
]
Raghu Angadi commented on HADOOP-2346:
--------------------------------------
> I am not sure same condition still stalls the write pipeline in trunk
Looks like trunk would be affected in the same way.
What should the write timeouts be? (my recommendations in parentheses):
# Read: when the datanode is serving data to a client (10 min)
# Write: when the client is writing data to DFS. Unlike read, this timeout is more 'disruptive' and thus needs to be more conservative (10 min). A quick sketch of why the write side needs separate handling follows below.
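For context on why the two cases differ (my own sketch, not code from the attached patch; the address and port below are made up for illustration): a plain blocking socket already has a read-side knob, SO_TIMEOUT, but there is no equivalent option for writes, so a write whose receiver stops reading can block forever.

{code}
import java.io.OutputStream;
import java.net.InetSocketAddress;
import java.net.Socket;

public class ReadVsWriteTimeout {
  public static void main(String[] args) throws Exception {
    // Hypothetical DataNode address and port, for illustration only.
    Socket s = new Socket();
    s.connect(new InetSocketAddress("datanode.example", 50010), 60 * 1000);

    // Read side: SO_TIMEOUT makes blocking reads fail with a
    // SocketTimeoutException after 10 minutes of inactivity.
    s.setSoTimeout(10 * 60 * 1000);

    // Write side: there is no SO_TIMEOUT equivalent for writes, so this call
    // can block indefinitely if the peer stops draining the connection,
    // which is exactly the stall described in this issue.
    OutputStream out = s.getOutputStream();
    out.write(new byte[64 * 1024]);

    s.close();
  }
}
{code}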
> DataNode should have timeout on socket writes.
> ----------------------------------------------
>
> Key: HADOOP-2346
> URL: https://issues.apache.org/jira/browse/HADOOP-2346
> Project: Hadoop Core
> Issue Type: Bug
> Components: dfs
> Affects Versions: 0.15.1
> Reporter: Raghu Angadi
> Assignee: Raghu Angadi
> Fix For: 0.16.1
>
> Attachments: HADOOP-2346.patch, HADOOP-2346.patch
>
>
> If a client opens a file and stops reading in the middle, the DataNode thread
> writing the data can be stuck forever. For DataNode sockets we set a read
> timeout but not a write timeout. I think we should add a write(data, timeout)
> method in IOUtils that assumes the underlying FileChannel is non-blocking.
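Roughly what such a write(data, timeout) helper could look like, as a sketch under my own assumptions: the channel must already be in non-blocking mode, and I use a SocketChannel rather than a FileChannel because only selectable channels can be registered with a Selector. The method name and its placement in IOUtils are taken loosely from the description above, not from the attached patch.

{code}
import java.io.IOException;
import java.net.SocketTimeoutException;
import java.nio.ByteBuffer;
import java.nio.channels.SelectionKey;
import java.nio.channels.Selector;
import java.nio.channels.SocketChannel;

public final class TimedWriteSketch {

  private TimedWriteSketch() {}

  /**
   * Writes the whole buffer, or throws SocketTimeoutException if the channel
   * does not become writable before the time budget runs out.
   */
  public static void write(SocketChannel channel, ByteBuffer buf, long timeoutMs)
      throws IOException {
    if (channel.isBlocking()) {
      throw new IllegalArgumentException("channel must be non-blocking");
    }
    long deadline = System.currentTimeMillis() + timeoutMs;
    try (Selector selector = Selector.open()) {
      SelectionKey key = channel.register(selector, SelectionKey.OP_WRITE);
      while (buf.hasRemaining()) {
        long remaining = deadline - System.currentTimeMillis();
        // Give up once the deadline has passed or nothing became writable in time.
        if (remaining <= 0 || selector.select(remaining) == 0) {
          throw new SocketTimeoutException(
              "socket write timed out after " + timeoutMs + " ms");
        }
        selector.selectedKeys().clear();
        channel.write(buf); // drains as much as the send buffer will accept
      }
      key.cancel();
    }
  }
}
{code}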