[
https://issues.apache.org/jira/browse/HDFS-6569?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14037640#comment-14037640
]
Brandon Li commented on HDFS-6569:
----------------------------------
The client stack trace:
{noformat}
14/06/19 11:26:23 WARN hdfs.DFSClient: DFSOutputStream ResponseProcessor exception for block BP-1063118919-10.11.1.167-1403201799003:blk_1073741828_1004
java.io.EOFException: Premature EOF: no length prefix available
    at org.apache.hadoop.hdfs.protocolPB.PBHelper.vintPrefixed(PBHelper.java:1997)
    at org.apache.hadoop.hdfs.protocol.datatransfer.PipelineAck.readFields(PipelineAck.java:176)
    at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer$ResponseProcessor.run(DFSOutputStream.java:793)
14/06/19 11:26:23 WARN hdfs.DFSClient: DataStreamer Exception
java.io.IOException: Broken pipe
    at sun.nio.ch.FileDispatcher.write0(Native Method)
    at sun.nio.ch.SocketDispatcher.write(SocketDispatcher.java:29)
    at sun.nio.ch.IOUtil.writeFromNativeBuffer(IOUtil.java:69)
    at sun.nio.ch.IOUtil.write(IOUtil.java:40)
    at sun.nio.ch.SocketChannelImpl.write(SocketChannelImpl.java:336)
    at org.apache.hadoop.net.SocketOutputStream$Writer.performIO(SocketOutputStream.java:63)
    at org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:142)
    at org.apache.hadoop.net.SocketOutputStream.write(SocketOutputStream.java:159)
    at org.apache.hadoop.net.SocketOutputStream.write(SocketOutputStream.java:117)
    at java.io.BufferedOutputStream.write(BufferedOutputStream.java:105)
    at java.io.DataOutputStream.write(DataOutputStream.java:90)
    at org.apache.hadoop.hdfs.DFSOutputStream$Packet.writeTo(DFSOutputStream.java:271)
    at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.run(DFSOutputStream.java:574)
put: All datanodes 127.0.0.1:50010 are bad. Aborting...
{noformat}
The DataNode error:
{noformat}
2014-06-19 11:26:23,668 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Shutting down for restart (BP-1063118919-10.11.1.167-1403201799003:blk_1073741828_1004).
2014-06-19 11:26:23,669 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Sending an out of band ack of type OOB_RESTART
2014-06-19 11:26:23,669 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Error sending OOB Ack.
java.io.IOException: The stream is closed
    at org.apache.hadoop.net.SocketOutputStream.write(SocketOutputStream.java:118)
    at java.io.BufferedOutputStream.flushBuffer(BufferedOutputStream.java:65)
    at java.io.BufferedOutputStream.flush(BufferedOutputStream.java:123)
    at java.io.DataOutputStream.flush(DataOutputStream.java:106)
    at org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.sendAckUpstreamUnprotected(BlockReceiver.java:1339)
    at org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.sendOOBResponse(BlockReceiver.java:1041)
    at org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:802)
    at org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:741)
    at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:124)
    at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:71)
    at org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:234)
    at java.lang.Thread.run(Thread.java:695)
{noformat}
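The two traces line up: the DataNode closes the ack stream as part of the restart shutdown, and only afterwards tries to flush the OOB_RESTART ack, so the client never sees it and fails the pipeline instead of waiting for the restart. A minimal stand-alone sketch of that ordering problem (plain JDK sockets, not the actual Hadoop code; the ack payload value is made up for illustration):

```java
import java.io.*;
import java.net.*;

public class OobAfterClose {
    public static void main(String[] args) throws Exception {
        // A local socket pair stands in for the DN -> client ack channel.
        ServerSocket server = new ServerSocket(0);
        Socket clientSide = new Socket("127.0.0.1", server.getLocalPort());
        Socket dnSide = server.accept();

        DataOutputStream ackOut = new DataOutputStream(
                new BufferedOutputStream(dnSide.getOutputStream()));

        // The buggy ordering: the socket is torn down for the restart first...
        dnSide.close();

        // ...and only then does the responder try to send the OOB ack.
        try {
            ackOut.writeInt(0xABCD);   // hypothetical ack payload
            ackOut.flush();            // the flush hits the already-closed socket
            System.out.println("ack sent");
        } catch (IOException e) {
            // Mirrors the DN log: "Error sending OOB Ack. java.io.IOException: ..."
            System.out.println("OOB ack failed: " + e.getMessage());
        }
        clientSide.close();
        server.close();
    }
}
```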
> OOB message can't be sent to the client when DataNode shuts down for upgrade
> ----------------------------------------------------------------------------
>
> Key: HDFS-6569
> URL: https://issues.apache.org/jira/browse/HDFS-6569
> Project: Hadoop HDFS
> Issue Type: Bug
> Components: datanode
> Affects Versions: 3.0.0, 2.2.0
> Reporter: Brandon Li
>
> The socket is closed too early, before the OOB message can be sent to the
> client, which causes the write pipeline failure.
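The fix direction the description implies can be sketched the same way: flush the OOB ack while the socket is still open, and close only afterwards, so the client actually receives it. Again plain JDK sockets, not the Hadoop code, with a made-up payload value:

```java
import java.io.*;
import java.net.*;

public class OobBeforeClose {
    public static void main(String[] args) throws Exception {
        ServerSocket server = new ServerSocket(0);
        Socket clientSide = new Socket("127.0.0.1", server.getLocalPort());
        Socket dnSide = server.accept();

        DataOutputStream ackOut = new DataOutputStream(
                new BufferedOutputStream(dnSide.getOutputStream()));

        // Correct ordering: flush the OOB ack while the connection is alive...
        ackOut.writeInt(0xABCD);   // hypothetical OOB restart payload
        ackOut.flush();

        // ...and only then shut the socket down for the restart. TCP delivers
        // the already-flushed bytes before the FIN.
        dnSide.close();

        DataInputStream in = new DataInputStream(clientSide.getInputStream());
        System.out.println("client received: 0x"
                + Integer.toHexString(in.readInt()).toUpperCase());

        clientSide.close();
        server.close();
    }
}
```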
--
This message was sent by Atlassian JIRA
(v6.2#6252)