Hi,
I am new to Hadoop and am running version 0.20.2. I tried to copy a 300 MB
file from the local filesystem to HDFS, but the copy failed with the errors
below. Please help me resolve this issue.
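
For reference, the command was along these lines (the local source path here
is illustrative; the HDFS destination is the one named in the error output):

    hadoop fs -copyFromLocal /local/data/cdr10M.csv /hdfs/data/input/cdr10M.csv
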
11/01/26 13:01:52 WARN hdfs.DFSClient: DataStreamer Exception: java.io.IOException: An existing connection was forcibly closed by the remote host
at sun.nio.ch.SocketDispatcher.write0(Native Method)
at sun.nio.ch.SocketDispatcher.write(SocketDispatcher.java:33)
at sun.nio.ch.IOUtil.writeFromNativeBuffer(IOUtil.java:104)
at sun.nio.ch.IOUtil.write(IOUtil.java:75)
at sun.nio.ch.SocketChannelImpl.write(SocketChannelImpl.java:334)
at org.apache.hadoop.net.SocketOutputStream$Writer.performIO(SocketOutputStream.java:55)
at org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:142)
at org.apache.hadoop.net.SocketOutputStream.write(SocketOutputStream.java:146)
at org.apache.hadoop.net.SocketOutputStream.write(SocketOutputStream.java:107)
at java.io.BufferedOutputStream.write(BufferedOutputStream.java:105)
at java.io.DataOutputStream.write(DataOutputStream.java:90)
at org.apache.hadoop.hdfs.DFSClient$DFSOutputStream$DataStreamer.run(DFSClient.java:2314)
11/01/26 13:01:52 WARN hdfs.DFSClient: Error Recovery for block blk_4184614741505116937_1012 bad datanode[0] 160.110.184.114:50010
11/01/26 13:01:52 WARN hdfs.DFSClient: Error Recovery for block blk_4184614741505116937_1012 in pipeline 160.110.184.114:50010, 160.110.184.111:50010: bad datanode 160.110.184.114:50010
11/01/26 13:01:55 WARN hdfs.DFSClient: Error Recovery for block blk_4184614741505116937_1012 failed because recovery from primary datanode 160.110.184.111:50010 failed 1 times. Pipeline was 160.110.184.114:50010, 160.110.184.111:50010. Will retry...
11/01/26 13:01:55 WARN hdfs.DFSClient: Error Recovery for block blk_4184614741505116937_1012 bad datanode[0] 160.110.184.114:50010
11/01/26 13:01:55 WARN hdfs.DFSClient: Error Recovery for block blk_4184614741505116937_1012 in pipeline 160.110.184.114:50010, 160.110.184.111:50010: bad datanode 160.110.184.114:50010
11/01/26 13:02:28 WARN hdfs.DFSClient: DataStreamer Exception: java.io.IOException: An existing connection was forcibly closed by the remote host
at sun.nio.ch.SocketDispatcher.write0(Native Method)
at sun.nio.ch.SocketDispatcher.write(SocketDispatcher.java:33)
at sun.nio.ch.IOUtil.writeFromNativeBuffer(IOUtil.java:104)
at sun.nio.ch.IOUtil.write(IOUtil.java:75)
at sun.nio.ch.SocketChannelImpl.write(SocketChannelImpl.java:334)
at org.apache.hadoop.net.SocketOutputStream$Writer.performIO(SocketOutputStream.java:55)
at org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:142)
at org.apache.hadoop.net.SocketOutputStream.write(SocketOutputStream.java:146)
at org.apache.hadoop.net.SocketOutputStream.write(SocketOutputStream.java:107)
at java.io.BufferedOutputStream.write(BufferedOutputStream.java:105)
at java.io.DataOutputStream.write(DataOutputStream.java:90)
at org.apache.hadoop.hdfs.DFSClient$DFSOutputStream$DataStreamer.run(DFSClient.java:2314)
11/01/26 13:02:28 WARN hdfs.DFSClient: Error Recovery for block blk_4184614741505116937_1013 bad datanode[0] 160.110.184.111:50010
copyFromLocal: All datanodes 160.110.184.111:50010 are bad. Aborting...
11/01/26 13:02:28 ERROR hdfs.DFSClient: Exception closing file /hdfs/data/input/cdr10M.csv : java.io.IOException: All datanodes 160.110.184.111:50010 are bad. Aborting...
java.io.IOException: All datanodes 160.110.184.111:50010 are bad. Aborting...
at org.apache.hadoop.hdfs.DFSClient$DFSOutputStream.processDatanodeError(DFSClient.java:2556)
at org.apache.hadoop.hdfs.DFSClient$DFSOutputStream.access$1600(DFSClient.java:2102)
at org.apache.hadoop.hdfs.DFSClient$DFSOutputStream$DataStreamer.run(DFSClient.java:2265)
--
With Regards,
Karthik