[
https://issues.apache.org/jira/browse/HADOOP-2346?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
]
Raghu Angadi updated HADOOP-2346:
---------------------------------
Attachment: HADOOP-2346.patch
Patch for trunk is attached. All DataNode and DFSClient sockets now have write
timeouts. The read timeout remains the same as before. The new streams are in
the hadoop.net package (strictly, they work on any socket with a selectable
channel).
There are multiple public APIs; Doug, please take a look. The static methods in
NetUtils are required to support flexible SocketFactory implementations where
the sockets might not have a channel. Since the default socket factory returns
a socket with a channel, the regular streams from Socket.getInputStream() and
getOutputStream() seem to have a
[problem|http://www.nabble.com/blocking-read-on-a-socket-blocks-write-too--tt15294234.html].
Three files are added under hadoop/net and one file is removed from hadoop/ipc.
test-core passes.
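For reference, the write-timeout technique the patch relies on can be sketched in plain NIO: register the socket's selectable channel for OP_WRITE and bound each select() by the timeout, so a stalled reader on the other end cannot block the writer forever. This is a minimal illustration under those assumptions, not the code in the attached patch; the class and method names below are hypothetical.

```java
import java.io.IOException;
import java.net.SocketTimeoutException;
import java.nio.ByteBuffer;
import java.nio.channels.SelectionKey;
import java.nio.channels.Selector;
import java.nio.channels.SocketChannel;

public class TimedWrite {
    /**
     * Write every byte of buf to a SocketChannel, failing with
     * SocketTimeoutException if the channel stays unwritable for timeoutMs.
     * The channel must already be in non-blocking mode.
     */
    static void write(SocketChannel ch, ByteBuffer buf, long timeoutMs)
            throws IOException {
        Selector selector = Selector.open();
        try {
            ch.register(selector, SelectionKey.OP_WRITE);
            while (buf.hasRemaining()) {
                // Wait until the kernel buffer can accept data, or time out.
                if (selector.select(timeoutMs) == 0) {
                    throw new SocketTimeoutException(
                            "write timed out after " + timeoutMs + " ms");
                }
                selector.selectedKeys().clear();
                ch.write(buf); // writes as much as the socket buffer allows
            }
        } finally {
            selector.close();
        }
    }
}
```

The select()-bounded loop is what makes a per-write timeout possible on an otherwise blocking-style code path; a plain Socket.getOutputStream().write() offers no such bound.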
> DataNode should have timeout on socket writes.
> ----------------------------------------------
>
> Key: HADOOP-2346
> URL: https://issues.apache.org/jira/browse/HADOOP-2346
> Project: Hadoop Core
> Issue Type: Bug
> Components: dfs
> Affects Versions: 0.15.1
> Reporter: Raghu Angadi
> Assignee: Raghu Angadi
> Fix For: 0.16.1
>
> Attachments: HADOOP-2346.patch, HADOOP-2346.patch, HADOOP-2346.patch
>
>
> If a client opens a file and stops reading in the middle, the DataNode thread
> writing the data can be stuck forever. For DataNode sockets we set a read
> timeout but not a write timeout. I think we should add a write(data, timeout)
> method in IOUtils that assumes the underlying FileChannel is non-blocking.
--
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.