[ https://issues.apache.org/jira/browse/HADOOP-2346?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Raghu Angadi updated HADOOP-2346:
---------------------------------

    Attachment: HADOOP-2346.patch

Fixed findbugs warnings. Findbugs is smart enough to know that 
socket.getOutputStream() does not need to be closed, but it does not know that 
hadoop.net.SocketOutputStream() is a true drop-in replacement. I think 
eventually we will write a 'FilterSocket' and override getOutputStream() and 
getInputStream().
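
For what it's worth, a rough sketch of what such a FilterSocket might look like 
is below. The SocketOutputStream/SocketInputStream constructors shown (taking a 
Socket plus a timeout in milliseconds) are only my assumption for illustration, 
not a committed API:

import java.io.IOException;
import java.io.InputStream;
import java.io.OutputStream;
import java.net.Socket;

import org.apache.hadoop.net.SocketInputStream;
import org.apache.hadoop.net.SocketOutputStream;

/**
 * Hypothetical FilterSocket: wraps a connected socket and hands out the
 * timeout-aware stream wrappers instead of the raw streams, so callers
 * (and findbugs) see them through the normal Socket API.
 */
public class FilterSocket extends Socket {
  private final Socket socket;       // the wrapped, already-connected socket
  private final long timeoutMillis;  // read/write timeout to apply

  public FilterSocket(Socket socket, long timeoutMillis) {
    this.socket = socket;
    this.timeoutMillis = timeoutMillis;
  }

  @Override
  public OutputStream getOutputStream() throws IOException {
    // Assumed constructor: SocketOutputStream(Socket, long timeout).
    return new SocketOutputStream(socket, timeoutMillis);
  }

  @Override
  public InputStream getInputStream() throws IOException {
    // Assumed constructor: SocketInputStream(Socket, long timeout).
    return new SocketInputStream(socket, timeoutMillis);
  }

  // A real FilterSocket would also delegate the remaining Socket methods
  // (connect, close, setSoTimeout, ...) to the wrapped socket.
}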


> DataNode should have timeout on socket writes.
> ----------------------------------------------
>
>                 Key: HADOOP-2346
>                 URL: https://issues.apache.org/jira/browse/HADOOP-2346
>             Project: Hadoop Core
>          Issue Type: Bug
>          Components: dfs
>    Affects Versions: 0.15.1
>            Reporter: Raghu Angadi
>            Assignee: Raghu Angadi
>             Fix For: 0.17.0
>
>         Attachments: HADOOP-2346.patch, HADOOP-2346.patch, HADOOP-2346.patch, 
> HADOOP-2346.patch, HADOOP-2346.patch, HADOOP-2346.patch, HADOOP-2346.patch, 
> HADOOP-2346.patch, HADOOP-2346.patch, HADOOP-2346.patch, HADOOP-2346.patch, 
> HADOOP-2346.patch, HADOOP-2346.patch, HADOOP-2346.patch, HADOOP-2346.patch
>
>
> If a client opens a file and stops reading in the middle, the DataNode thread 
> writing the data could be stuck forever. For DataNode sockets we set a read 
> timeout but not a write timeout. I think we should add a write(data, timeout) 
> method in IOUtils that assumes the underlying FileChannel is non-blocking.
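
To illustrate the write-with-timeout idea from the quoted description, here is 
a minimal sketch using a Selector over a non-blocking SocketChannel. The method 
name, signature, and placement are illustrative only, not the API the patch 
actually adds to IOUtils:

import java.io.IOException;
import java.net.SocketTimeoutException;
import java.nio.ByteBuffer;
import java.nio.channels.SelectionKey;
import java.nio.channels.Selector;
import java.nio.channels.SocketChannel;

public class TimedWrites {
  /**
   * Writes all of 'data', failing with SocketTimeoutException if no progress
   * can be made for 'timeoutMillis'. Assumes the channel has already been
   * configured non-blocking. Note the timeout applies per stall, not to the
   * overall write.
   */
  public static void writeFully(SocketChannel channel, ByteBuffer data,
                                long timeoutMillis) throws IOException {
    Selector selector = Selector.open();
    try {
      SelectionKey key = channel.register(selector, SelectionKey.OP_WRITE);
      while (data.hasRemaining()) {
        if (channel.write(data) > 0) {
          continue; // made progress, try again immediately
        }
        // No progress: wait until the socket is writable or we time out.
        if (selector.select(timeoutMillis) == 0) {
          throw new SocketTimeoutException(
              "write timed out after " + timeoutMillis + " ms");
        }
        selector.selectedKeys().clear();
      }
      key.cancel();
    } finally {
      selector.close();
    }
  }
}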

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.
