[ https://issues.apache.org/jira/browse/HDFS-4255?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13507974#comment-13507974 ]

Harsh J commented on HDFS-4255:
-------------------------------

Fuller log, taken from a test case where we stop a DN while a block is still 
being written to it; a minimal sketch of such a test follows the log.

{code}
2012-12-01 19:10:23,043 INFO  hdfs.MiniDFSCluster (MiniDFSCluster.java:stopDataNode(1592)) - MiniDFSCluster Stopping DataNode 127.0.0.1:63992 from a total of 3 datanodes.
2012-12-01 19:10:23,043 WARN  datanode.DirectoryScanner (DirectoryScanner.java:shutdown(289)) - DirectoryScanner: shutdown has been called
2012-12-01 19:10:23,047 INFO  mortbay.log (Slf4jLog.java:info(67)) - Stopped SelectChannelConnector@localhost:0
2012-12-01 19:10:23,149 INFO  ipc.Server (Server.java:stop(2081)) - Stopping server on 63994
2012-12-01 19:10:23,162 INFO  ipc.Server (Server.java:run(685)) - Stopping IPC Server listener on 63994
2012-12-01 19:10:23,167 INFO  datanode.DataNode (BlockReceiver.java:run(968)) - PacketResponder: BP-1493454111-192.168.2.1-1354369220726:blk_-8775461920430955284_1002, type=LAST_IN_PIPELINE, downstreams=0:[]: Thread is interrupted.
2012-12-01 19:10:23,167 INFO  datanode.DataNode (BlockReceiver.java:receiveBlock(671)) - Exception for BP-1493454111-192.168.2.1-1354369220726:blk_-8775461920430955284_1002
java.io.InterruptedIOException: Interruped while waiting for IO on channel java.nio.channels.SocketChannel[closed]. 59874 millis timeout left.
        at org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:352)
        at org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157)
        at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:159)
        at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:129)
        at java.io.FilterInputStream.read(FilterInputStream.java:116)
        at java.io.BufferedInputStream.fill(BufferedInputStream.java:218)
        at java.io.BufferedInputStream.read1(BufferedInputStream.java:258)
        at java.io.BufferedInputStream.read(BufferedInputStream.java:317)
        at java.io.DataInputStream.read(DataInputStream.java:132)
        at org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:194)
        at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:213)
        at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134)
        at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109)
        at org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:414)
        at org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:641)
        at org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:505)
        at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:98)
        at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:66)
        at org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:219)
        at java.lang.Thread.run(Thread.java:680)
2012-12-01 19:10:23,167 INFO  datanode.DataNode (BlockReceiver.java:run(955)) - PacketResponder: BP-1493454111-192.168.2.1-1354369220726:blk_-8775461920430955284_1002, type=HAS_DOWNSTREAM_IN_PIPELINE
java.io.EOFException: Premature EOF: no length prefix available
        at org.apache.hadoop.hdfs.protocol.HdfsProtoUtil.vintPrefixed(HdfsProtoUtil.java:171)
        at org.apache.hadoop.hdfs.protocol.datatransfer.PipelineAck.readFields(PipelineAck.java:116)
        at org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.run(BlockReceiver.java:905)
        at java.lang.Thread.run(Thread.java:680)
2012-12-01 19:10:23,165 INFO  ipc.Server (Server.java:run(827)) - Stopping IPC Server Responder
2012-12-01 19:10:23,165 INFO  datanode.DataNode (DataNode.java:shutdown(1126)) - Waiting for threadgroup to exit, active threads is 2
2012-12-01 19:10:23,168 INFO  datanode.DataNode (DataXceiver.java:writeBlock(536)) - opWriteBlock BP-1493454111-192.168.2.1-1354369220726:blk_-8775461920430955284_1002 received exception java.io.IOException: Interrupted receiveBlock
2012-12-01 19:10:23,167 INFO  datanode.DataNode (BlockReceiver.java:run(1006)) - PacketResponder: BP-1493454111-192.168.2.1-1354369220726:blk_-8775461920430955284_1002, type=LAST_IN_PIPELINE, downstreams=0:[] terminating
2012-12-01 19:10:23,169 ERROR datanode.DataNode (DataXceiver.java:run(223)) - 127.0.0.1:63992:DataXceiver error processing WRITE_BLOCK operation  src: /127.0.0.1:64003 dest: /127.0.0.1:63992
java.io.IOException: Interrupted receiveBlock
        at org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:686)
        at org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:505)
        at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:98)
        at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:66)
        at org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:219)
        at java.lang.Thread.run(Thread.java:680)
{code}
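
For context, a minimal sketch of the kind of test that produces a log like the 
one above: start a 3-DN MiniDFSCluster, open a file for write, hflush so the 
pipeline is live on all DNs, then stop one DN. This is illustrative only, not 
the actual test; the class name, file path, and payload size are arbitrary.

{code}
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FSDataOutputStream;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.hdfs.DistributedFileSystem;
import org.apache.hadoop.hdfs.HdfsConfiguration;
import org.apache.hadoop.hdfs.MiniDFSCluster;

public class DnStopDuringWriteSketch {
  public static void main(String[] args) throws Exception {
    Configuration conf = new HdfsConfiguration();
    // Start a 3-DN mini cluster, mirroring the "total of 3 datanodes" in the log.
    MiniDFSCluster cluster =
        new MiniDFSCluster.Builder(conf).numDataNodes(3).build();
    try {
      DistributedFileSystem fs = cluster.getFileSystem();
      Path file = new Path("/test/openForWrite");           // arbitrary path
      FSDataOutputStream out = fs.create(file, (short) 3);  // replication 3 => 3-DN pipeline
      out.write(new byte[4096]);                            // put some data into the first block
      out.hflush();                                         // make sure the write pipeline is active

      // Stop one DN while the block is still open for write; the interrupted
      // BlockReceiver/PacketResponder threads then log traces like the above.
      cluster.stopDataNode(0);

      out.close();
    } finally {
      cluster.shutdown();
    }
  }
}
{code}

Running something along these lines against branch-2 should reproduce the 
interrupted receiveBlock and "Premature EOF" traces quoted above.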
                
> Useless stacktrace shown in DN when there's an error writing a block
> --------------------------------------------------------------------
>
>                 Key: HDFS-4255
>                 URL: https://issues.apache.org/jira/browse/HDFS-4255
>             Project: Hadoop HDFS
>          Issue Type: Improvement
>          Components: datanode
>    Affects Versions: 2.0.2-alpha
>            Reporter: Harsh J
>            Priority: Minor
>
> The DN sometimes carries these, especially when it's asked to shut down and 
> there's ongoing write activity. The stack trace is absolutely useless and may 
> be improved, and the message it comes as part of is logged at INFO, which 
> should not be the case when a stack trace needs to be printed (indicative of 
> trouble).
> {code}
> 2012-12-01 19:10:23,167 INFO  datanode.DataNode (BlockReceiver.java:run(955)) - PacketResponder: BP-1493454111-192.168.2.1-1354369220726:blk_-8775461920430955284_1002, type=HAS_DOWNSTREAM_IN_PIPELINE
> java.io.EOFException: Premature EOF: no length prefix available
>       at org.apache.hadoop.hdfs.protocol.HdfsProtoUtil.vintPrefixed(HdfsProtoUtil.java:171)
>       at org.apache.hadoop.hdfs.protocol.datatransfer.PipelineAck.readFields(PipelineAck.java:116)
>       at org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.run(BlockReceiver.java:905)
>       at java.lang.Thread.run(Thread.java:680)
> {code}
> Full scenario log in comments.
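
As a rough illustration of the behaviour the description argues for, here is a 
hedged sketch (not a patch; the helper, its name, and its caller are 
hypothetical) of logging the expected EOF case at INFO without a stack trace, 
while keeping the full trace at WARN for genuinely unexpected failures:

{code}
import java.io.EOFException;
import java.io.IOException;

import org.apache.commons.logging.Log;
import org.apache.commons.logging.LogFactory;

public class AckLoggingSketch {
  private static final Log LOG = LogFactory.getLog(AckLoggingSketch.class);

  /**
   * Hypothetical helper illustrating the suggested behaviour: an EOF from a
   * downstream peer that has gone away (e.g. during shutdown) is an expected
   * condition, so log a single INFO line without a stack trace; anything else
   * keeps the full trace, at WARN rather than INFO.
   */
  static void logAckReadFailure(String responderId, IOException ioe) {
    if (ioe instanceof EOFException) {
      LOG.info(responderId + ": downstream closed before an ack was read ("
          + ioe.getMessage() + ")");
    } else {
      LOG.warn(responderId + ": unexpected exception while reading ack", ioe);
    }
  }

  public static void main(String[] args) {
    // Demonstrate both paths with synthetic exceptions.
    logAckReadFailure("PacketResponder: blk_-8775461920430955284_1002",
        new EOFException("Premature EOF: no length prefix available"));
    logAckReadFailure("PacketResponder: blk_-8775461920430955284_1002",
        new IOException("Connection reset by peer"));
  }
}
{code}

The actual change would presumably live in BlockReceiver's PacketResponder 
around the PipelineAck.readFields() call, which is where the EOFException in 
the quoted trace originates.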

