[ https://issues.apache.org/jira/browse/HADOOP-3678?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12655173#action_12655173 ]

Cosmin Lehene commented on HADOOP-3678:
---------------------------------------

I'm getting this with Hadoop 0.18.2. Reopening.


2008-12-10 02:00:17,059 WARN org.apache.hadoop.dfs.DataNode: DatanodeRegistration(10.72.7.153:50010, storageID=DS-594776053-127.0.0.1-50010-1223411294140, infoPort=50075, ipcPort=50020):Got exception while serving blk_698267671609892267_112619 to /10.72.7.153:
java.io.IOException: Connection reset by peer
        at sun.nio.ch.FileDispatcher.write0(Native Method)
        at sun.nio.ch.SocketDispatcher.write(SocketDispatcher.java:29)
        at sun.nio.ch.IOUtil.writeFromNativeBuffer(IOUtil.java:104)
        at sun.nio.ch.IOUtil.write(IOUtil.java:75)
        at sun.nio.ch.SocketChannelImpl.write(SocketChannelImpl.java:334)
        at org.apache.hadoop.net.SocketOutputStream$Writer.performIO(SocketOutputStream.java:55)
        at org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:140)
        at org.apache.hadoop.net.SocketOutputStream.write(SocketOutputStream.java:146)
        at org.apache.hadoop.net.SocketOutputStream.write(SocketOutputStream.java:107)
        at java.io.BufferedOutputStream.flushBuffer(BufferedOutputStream.java:65)
        at java.io.BufferedOutputStream.flush(BufferedOutputStream.java:123)
        at java.io.DataOutputStream.flush(DataOutputStream.java:106)
        at org.apache.hadoop.dfs.DataNode$BlockSender.sendBlock(DataNode.java:2019)
        at org.apache.hadoop.dfs.DataNode$DataXceiver.readBlock(DataNode.java:1140)
        at org.apache.hadoop.dfs.DataNode$DataXceiver.run(DataNode.java:1068)
        at java.lang.Thread.run(Thread.java:619)

2008-12-10 02:00:17,060 ERROR org.apache.hadoop.dfs.DataNode: DatanodeRegistration(10.72.7.153:50010, storageID=DS-594776053-127.0.0.1-50010-1223411294140, infoPort=50075, ipcPort=50020):DataXceiver: java.io.IOException: Connection reset by peer
        at sun.nio.ch.FileDispatcher.write0(Native Method)
        at sun.nio.ch.SocketDispatcher.write(SocketDispatcher.java:29)
        at sun.nio.ch.IOUtil.writeFromNativeBuffer(IOUtil.java:104)
        at sun.nio.ch.IOUtil.write(IOUtil.java:75)
        at sun.nio.ch.SocketChannelImpl.write(SocketChannelImpl.java:334)
        at org.apache.hadoop.net.SocketOutputStream$Writer.performIO(SocketOutputStream.java:55)
        at org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:140)
        at org.apache.hadoop.net.SocketOutputStream.write(SocketOutputStream.java:146)
        at org.apache.hadoop.net.SocketOutputStream.write(SocketOutputStream.java:107)
        at java.io.BufferedOutputStream.flushBuffer(BufferedOutputStream.java:65)
        at java.io.BufferedOutputStream.flush(BufferedOutputStream.java:123)
        at java.io.DataOutputStream.flush(DataOutputStream.java:106)
        at org.apache.hadoop.dfs.DataNode$BlockSender.sendBlock(DataNode.java:2019)
        at org.apache.hadoop.dfs.DataNode$DataXceiver.readBlock(DataNode.java:1140)
        at org.apache.hadoop.dfs.DataNode$DataXceiver.run(DataNode.java:1068)
        at java.lang.Thread.run(Thread.java:619)



> Avoid spurious "DataXceiver: java.io.IOException: Connection reset by peer" 
> errors in DataNode log
> --------------------------------------------------------------------------------------------------
>
>                 Key: HADOOP-3678
>                 URL: https://issues.apache.org/jira/browse/HADOOP-3678
>             Project: Hadoop Core
>          Issue Type: Bug
>    Affects Versions: 0.17.0
>            Reporter: Raghu Angadi
>            Assignee: Raghu Angadi
>            Priority: Blocker
>             Fix For: 0.17.2
>
>         Attachments: HADOOP-3678-branch-17.patch, HADOOP-3678.patch, HADOOP-3678.patch
>
>
> When a client reads data using read(), it closes the socket when it is 
> done, and often it does not read all the way to the end of a block. The 
> datanode on the other side keeps writing data until the client connection 
> is closed or the end of the block is reached. If the client stops reading 
> before the end of the block, the datanode writes an error message and 
> stack trace to the datanode log. It should not: this is not an error, and 
> it only pollutes the log and confuses the user.
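One way to suppress this is to classify the IOException before logging it, and log benign client disconnects quietly without a stack trace. The sketch below is illustrative only; the helper name and the list of messages are assumptions, not the actual HADOOP-3678 patch:

```java
import java.io.IOException;

public class BenignDisconnect {
    // Messages that typically mean the client simply closed its socket
    // before the whole block was sent (assumed list, not exhaustive).
    private static final String[] CLIENT_CLOSE_MESSAGES = {
        "Connection reset by peer",
        "Broken pipe",
        "Socket closed"
    };

    /** Returns true if the exception looks like an ordinary early client
     *  disconnect that could be logged at a low level without a trace. */
    public static boolean isClientDisconnect(IOException e) {
        String msg = e.getMessage();
        if (msg == null) {
            return false;
        }
        for (String benign : CLIENT_CLOSE_MESSAGES) {
            if (msg.contains(benign)) {
                return true;
            }
        }
        return false;
    }

    public static void main(String[] args) {
        // Hypothetical catch site, e.g. around BlockSender.sendBlock():
        IOException reset = new IOException("Connection reset by peer");
        IOException disk  = new IOException("No space left on device");
        System.out.println(isClientDisconnect(reset)); // benign: skip trace
        System.out.println(isClientDisconnect(disk));  // real error: log it
    }
}
```

The DataXceiver catch block would then log a one-line INFO message for benign disconnects and keep the full ERROR-with-stack-trace path for everything else.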

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.
