Hi,

We are running a three-node cluster. For the past two days, whenever we copy a file to HDFS, it throws java.io.IOException: Bad connect ack with firstBadLink. I have searched the net but have not been able to resolve the issue.
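For reference, the copy is roughly equivalent to the following (a minimal sketch; the class name and paths are placeholders):

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class HdfsCopy {
    public static void main(String[] args) throws Exception {
        // Picks up core-site.xml / hdfs-site.xml from the classpath
        Configuration conf = new Configuration();
        FileSystem fs = FileSystem.get(conf);
        // Placeholder paths; this copy fails with "Bad connect ack with firstBadLink"
        fs.copyFromLocalFile(new Path("/tmp/sample.txt"),
                             new Path("/user/hadoop/sample.txt"));
        fs.close();
    }
}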
The following is the stack trace from the datanode log:

2012-05-04 18:08:08,868 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: writeBlock blk_-7520371350112346377_50118 received exception java.net.SocketException: Connection reset
2012-05-04 18:08:08,869 ERROR org.apache.hadoop.hdfs.server.datanode.DataNode: DatanodeRegistration(172.23.208.17:50010, storageID=DS-1340171424-172.23.208.17-50010-1334672673051, infoPort=50075, ipcPort=50020):DataXceiver
java.net.SocketException: Connection reset
        at java.net.SocketInputStream.read(SocketInputStream.java:168)
        at java.io.BufferedInputStream.read1(BufferedInputStream.java:256)
        at java.io.BufferedInputStream.read(BufferedInputStream.java:317)
        at java.io.DataInputStream.read(DataInputStream.java:132)
        at org.apache.hadoop.hdfs.server.datanode.BlockReceiver.readToBuf(BlockReceiver.java:262)
        at org.apache.hadoop.hdfs.server.datanode.BlockReceiver.readNextPacket(BlockReceiver.java:309)
        at org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:373)
        at org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:525)
        at org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:357)
        at org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:103)
        at java.lang.Thread.run(Thread.java:662)
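
From what I have read, "Bad connect ack with firstBadLink" usually means a node in the write pipeline could not connect to the next datanode on its data-transfer port (50010 in our setup), often because of a firewall or DNS problem. To rule that out, I put together the small check below (a quick sketch; 172.23.208.17 is taken from the log above, and the other two addresses are placeholders for our remaining nodes):

import java.net.InetSocketAddress;
import java.net.Socket;

public class DataNodePortCheck {
    public static void main(String[] args) {
        // First host is from the log above; the other two are placeholders
        String[] hosts = { "172.23.208.17", "172.23.208.18", "172.23.208.19" };
        for (String host : hosts) {
            Socket s = new Socket();
            try {
                // 5-second connect timeout against the data-transfer port
                s.connect(new InetSocketAddress(host, 50010), 5000);
                System.out.println(host + ":50010 is reachable");
            } catch (Exception e) {
                System.out.println(host + ":50010 is NOT reachable: " + e);
            } finally {
                try { s.close(); } catch (Exception ignored) {}
            }
        }
    }
}

Running this from each of the three nodes should show whether any datanode cannot reach the others on port 50010.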


It would be great if someone could point me in the right direction on how to solve this problem.

-- 
https://github.com/zinnia-phatak-dev/Nectar
