[ https://issues.apache.org/jira/browse/HDFS-1918?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13031958#comment-13031958 ]

Jean-Daniel Cryans commented on HDFS-1918:
------------------------------------------

Here's an example of what I see in 0.20-append (I looked at trunk and it has the
same issue, albeit with different stack traces):

{noformat}
2011-05-04 18:18:54,956 WARN org.apache.hadoop.hdfs.server.datanode.DataNode: DatanodeRegistration(10.4.5.38:50010, storageID=DS-610395285-10.10.21.45-50010-1269377100398, infoPort=50075, ipcPort=50020):Got exception while serving blk_8258046632296640241_12489573 to /10.4.5.38:
java.net.SocketTimeoutException: 480000 millis timeout while waiting for channel to be ready for write. ch : java.nio.channels.SocketChannel[connected local=/10.4.5.38:50010 remote=/10.4.5.38:57636]
        at org.apache.hadoop.net.SocketIOWithTimeout.waitForIO(SocketIOWithTimeout.java:246)
        at org.apache.hadoop.net.SocketOutputStream.waitForWritable(SocketOutputStream.java:159)
        at org.apache.hadoop.net.SocketOutputStream.transferToFully(SocketOutputStream.java:198)
        at org.apache.hadoop.hdfs.server.datanode.BlockSender.sendChunks(BlockSender.java:350)
        at org.apache.hadoop.hdfs.server.datanode.BlockSender.sendBlock(BlockSender.java:436)
        at org.apache.hadoop.hdfs.server.datanode.DataXceiver.readBlock(DataXceiver.java:197)
        at org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:110)

2011-05-04 18:18:54,956 ERROR org.apache.hadoop.hdfs.server.datanode.DataNode: DatanodeRegistration(10.4.5.38:50010, storageID=DS-610395285-10.10.21.45-50010-1269377100398, infoPort=50075, ipcPort=50020):DataXceiver
java.net.SocketTimeoutException: 480000 millis timeout while waiting for channel to be ready for write. ch : java.nio.channels.SocketChannel[connected local=/10.4.5.38:50010 remote=/10.4.5.38:57636]
        at org.apache.hadoop.net.SocketIOWithTimeout.waitForIO(SocketIOWithTimeout.java:246)
        at org.apache.hadoop.net.SocketOutputStream.waitForWritable(SocketOutputStream.java:159)
        at org.apache.hadoop.net.SocketOutputStream.transferToFully(SocketOutputStream.java:198)
        at org.apache.hadoop.hdfs.server.datanode.BlockSender.sendChunks(BlockSender.java:350)
        at org.apache.hadoop.hdfs.server.datanode.BlockSender.sendBlock(BlockSender.java:436)
        at org.apache.hadoop.hdfs.server.datanode.DataXceiver.readBlock(DataXceiver.java:197)
        at org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:110)

{noformat}

> DataXceiver double logs every IOE out of readBlock
> --------------------------------------------------
>
>                 Key: HDFS-1918
>                 URL: https://issues.apache.org/jira/browse/HDFS-1918
>             Project: Hadoop HDFS
>          Issue Type: Improvement
>    Affects Versions: 0.20.2
>            Reporter: Jean-Daniel Cryans
>            Priority: Trivial
>             Fix For: 0.22.0
>
>
> DataXceiver will log an IOE twice because opReadBlock() will catch it, log a
> WARN, then throw it again, only to be caught in run() as a Throwable and
> logged as an ERROR. As far as I can tell, all the information is the same in
> both messages.
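
For illustration only, here is a minimal, self-contained sketch of the double-logging pattern described in the summary. It is not the actual Hadoop source: the class name, method bodies, and use of java.util.logging are stand-ins for DataXceiver.readBlock()/run() and the DataNode log.

{code:java}
import java.io.IOException;
import java.util.logging.Level;
import java.util.logging.Logger;

public class DoubleLogSketch {
  private static final Logger LOG = Logger.getLogger("DoubleLogSketch");

  // Stand-in for the readBlock() path: catches the IOException, logs it at
  // WARN, then rethrows it to the caller.
  static void readBlock() throws IOException {
    try {
      throw new IOException("simulated 480000 millis timeout");
    } catch (IOException ioe) {
      LOG.log(Level.WARNING, "Got exception while serving block", ioe); // first log
      throw ioe;                                                        // rethrown
    }
  }

  // Stand-in for the run() loop: catches the same exception as a Throwable
  // and logs it a second time, producing the duplicate ERROR entry.
  public static void main(String[] args) {
    try {
      readBlock();
    } catch (Throwable t) {
      LOG.log(Level.SEVERE, "DataXceiver", t); // second log, same stack trace
    }
  }
}
{code}

Running this prints the same exception and stack trace once at WARNING and again at SEVERE, which is the duplication shown in the log excerpt above.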

