[ https://issues.apache.org/jira/browse/HADOOP-8519?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13399063#comment-13399063 ]
Andy Isaacson commented on HADOOP-8519:
---------------------------------------
The error is a little different on 2.0:
{code}
2012-06-21 18:28:36,251 ERROR org.apache.hadoop.hdfs.server.datanode.DataNode: BlockSender.sendChunks() exception:
java.net.SocketTimeoutException: 480000 millis timeout while waiting for channel to be ready for write. ch : java.nio.channels.SocketChannel[connected local=/192.168.122.87:50010 remote=/192.168.122.3:51436]
        at org.apache.hadoop.net.SocketIOWithTimeout.waitForIO(SocketIOWithTimeout.java:246)
        at org.apache.hadoop.net.SocketOutputStream.waitForWritable(SocketOutputStream.java:164)
        at org.apache.hadoop.net.SocketOutputStream.transferToFully(SocketOutputStream.java:203)
        at org.apache.hadoop.hdfs.server.datanode.BlockSender.sendPacket(BlockSender.java:482)
        at org.apache.hadoop.hdfs.server.datanode.BlockSender.sendBlock(BlockSender.java:634)
        at org.apache.hadoop.hdfs.server.datanode.DataXceiver.readBlock(DataXceiver.java:252)
        at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opReadBlock(Receiver.java:88)
        at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:63)
        at org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:189)
        at java.lang.Thread.run(Thread.java:679)
2012-06-21 18:28:36,252 INFO org.apache.hadoop.hdfs.server.datanode.DataNode.clienttrace: src: /192.168.122.87:50010, dest: /192.168.122.3:51436, bytes: 53697024, op: HDFS_READ, cliID: DFSClient_NONMAPREDUCE_-1988072026_1, offset: 0, srvID: DS-706541979-127.0.1.1-50010-1339724203679, blockid: BP-882164591-127.0.1.1-1339723952222:blk_-1935427635464392086_1010, duration: 482450603444
2012-06-21 18:28:36,252 WARN org.apache.hadoop.hdfs.server.datanode.DataNode: DatanodeRegistration(192.168.122.87, storageID=DS-706541979-127.0.1.1-50010-1339724203679, infoPort=50075, ipcPort=50020, storageInfo=lv=-40;cid=CID-02666c8e-a05e-480f-94df-f5226414f260;nsid=1569472409;c=0):Got exception while serving BP-882164591-127.0.1.1-1339723952222:blk_-1935427635464392086_1010 to /192.168.122.3:51436
java.net.SocketTimeoutException: 480000 millis timeout while waiting for channel to be ready for write. ch : java.nio.channels.SocketChannel[connected local=/192.168.122.87:50010 remote=/192.168.122.3:51436]
        at org.apache.hadoop.net.SocketIOWithTimeout.waitForIO(SocketIOWithTimeout.java:246)
        at org.apache.hadoop.net.SocketOutputStream.waitForWritable(SocketOutputStream.java:164)
        at org.apache.hadoop.net.SocketOutputStream.transferToFully(SocketOutputStream.java:203)
        at org.apache.hadoop.hdfs.server.datanode.BlockSender.sendPacket(BlockSender.java:482)
        at org.apache.hadoop.hdfs.server.datanode.BlockSender.sendBlock(BlockSender.java:634)
        at org.apache.hadoop.hdfs.server.datanode.DataXceiver.readBlock(DataXceiver.java:252)
        at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opReadBlock(Receiver.java:88)
        at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:63)
        at org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:189)
        at java.lang.Thread.run(Thread.java:679)
2012-06-21 18:28:36,253 ERROR org.apache.hadoop.hdfs.server.datanode.DataNode: ubu-cdh-3:50010:DataXceiver error processing READ_BLOCK operation src: /192.168.122.3:51436 dest: /192.168.122.87:50010
java.net.SocketTimeoutException: 480000 millis timeout while waiting for channel to be ready for write. ch : java.nio.channels.SocketChannel[connected local=/192.168.122.87:50010 remote=/192.168.122.3:51436]
        at org.apache.hadoop.net.SocketIOWithTimeout.waitForIO(SocketIOWithTimeout.java:246)
        at org.apache.hadoop.net.SocketOutputStream.waitForWritable(SocketOutputStream.java:164)
        at org.apache.hadoop.net.SocketOutputStream.transferToFully(SocketOutputStream.java:203)
        at org.apache.hadoop.hdfs.server.datanode.BlockSender.sendPacket(BlockSender.java:482)
        at org.apache.hadoop.hdfs.server.datanode.BlockSender.sendBlock(BlockSender.java:634)
        at org.apache.hadoop.hdfs.server.datanode.DataXceiver.readBlock(DataXceiver.java:252)
        at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opReadBlock(Receiver.java:88)
        at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:63)
        at org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:189)
        at java.lang.Thread.run(Thread.java:679)
{code}
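To make the intent concrete, here is a minimal, self-contained sketch of the logging policy being requested. It is not the actual DataXceiver/BlockSender code: the class, interface, and method names below are hypothetical, and java.util.logging stands in for Hadoop's logging. The only point it illustrates is that a SocketTimeoutException caused by an idle client is treated as an expected condition and logged at INFO, while other IOExceptions keep their ERROR severity.
{code}
import java.io.IOException;
import java.net.SocketTimeoutException;
import java.util.logging.Level;
import java.util.logging.Logger;

// Illustrative sketch only -- not the real DataXceiver. It demonstrates the
// logging policy this issue asks for: an idle client that lets the write
// timeout expire is an expected condition, so it is logged at INFO, while
// other I/O failures keep their ERROR severity.
public class BlockReadLoggingSketch {
    private static final Logger LOG =
        Logger.getLogger(BlockReadLoggingSketch.class.getName());

    /** Hypothetical stand-in for serving a READ_BLOCK request. */
    interface BlockSend {
        void sendBlock() throws IOException;
    }

    static void serveReadRequest(String client, BlockSend sender) {
        try {
            sender.sendBlock();
        } catch (SocketTimeoutException e) {
            // The client stopped reading (e.g. an idle DFSClient holding the
            // stream open); nothing is wrong on the datanode side.
            LOG.log(Level.INFO,
                "Read to " + client + " timed out waiting for the client: " + e);
        } catch (IOException e) {
            // Anything else is still a genuine server-side problem.
            LOG.log(Level.SEVERE, "Error serving block to " + client, e);
        }
    }

    public static void main(String[] args) {
        // Simulate the idle-client case from the log excerpt above.
        serveReadRequest("/192.168.122.3:51436", () -> {
            throw new SocketTimeoutException(
                "480000 millis timeout while waiting for channel to be ready for write");
        });
    }
}
{code}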
> idle client socket triggers DN ERROR log (should be INFO or DEBUG)
> ------------------------------------------------------------------
>
> Key: HADOOP-8519
> URL: https://issues.apache.org/jira/browse/HADOOP-8519
> Project: Hadoop Common
> Issue Type: Bug
> Affects Versions: 0.20.2
> Environment: Red Hat Enterprise Linux Server release 6.2 (Santiago)
> Reporter: Jeff Lord
> Assignee: Andy Isaacson
>
> The Datanode service logs java.net.SocketTimeoutException at ERROR level.
> This message indicates that the datanode is unable to send data to the
> client because the client has stopped reading. It is not really a cause for
> alarm and should be logged at INFO level.
> 2012-06-18 17:47:13 ERROR org.apache.hadoop.hdfs.server.datanode.DataNode DatanodeRegistration(x.x.x.x:50010, storageID=DS-196671195-10.10.120.67-50010-1334328338972, infoPort=50075, ipcPort=50020):DataXceiver
> java.net.SocketTimeoutException: 480000 millis timeout while waiting for channel to be ready for write. ch : java.nio.channels.SocketChannel[connected local=/10.10.120.67:50010 remote=/10.10.120.67:59282]
>         at org.apache.hadoop.net.SocketIOWithTimeout.waitForIO(SocketIOWithTimeout.java:246)
>         at org.apache.hadoop.net.SocketOutputStream.waitForWritable(SocketOutputStream.java:159)
>         at org.apache.hadoop.net.SocketOutputStream.transferToFully(SocketOutputStream.java:198)
>         at org.apache.hadoop.hdfs.server.datanode.BlockSender.sendChunks(BlockSender.java:397)
>         at org.apache.hadoop.hdfs.server.datanode.BlockSender.sendBlock(BlockSender.java:493)
>         at org.apache.hadoop.hdfs.server.datanode.DataXceiver.readBlock(DataXceiver.java:267)
>         at org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:163)