It was due to the firewall configuration: port 50010 was not in the allow list.

My cluster servers use ufw as the firewall; once I disabled it, the error went 
away and I was able to test-run the cluster.

To disable ufw from the terminal: sudo ufw disable
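
Rather than disabling ufw outright, it should also be possible to just allow the 
Hadoop ports through it. For example (the port numbers below are the Hadoop 2.x 
defaults on my setup, so they may differ on other clusters, and they only cover 
the DataNode):

    sudo ufw allow 50010/tcp   # DataNode data transfer (the port in the error)
    sudo ufw allow 50020/tcp   # DataNode IPC
    sudo ufw allow 50075/tcp   # DataNode HTTP UI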

Is there a list of ports needed by Hadoop to make firewall configuration easier?

Many Thanks! Caesar.

From: Vishnu Viswanath [mailto:[email protected]] 
Sent: Thursday, June 04, 2015 1:51 AM
To: [email protected]
Subject: Re: ack with firstBadLink as 192.168.1.12:50010?

I have seen this issue, and it was due to the data nodes not being able to 
process that many requests at a time.
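
If that is the cause here, one setting that may be worth looking at (an 
assumption on my part, not something I verified for this cluster) is 
dfs.datanode.max.transfer.threads in hdfs-site.xml, which caps the number of 
concurrent transfer threads per DataNode, for example:

    <property>
      <name>dfs.datanode.max.transfer.threads</name>
      <value>8192</value> <!-- the default is 4096 -->
    </property>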

On Thu, Jun 4, 2015 at 11:14 AM, Arpit Agarwal <[email protected]> wrote:

I recall seeing this error due to a network misconfiguration. You may want to 
verify that IP addresses and host names are set up correctly.
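
For example, a few generic checks on each node (assuming a typical 
Debian/Ubuntu-style setup; adjust names for your hosts) can surface mismatches:

    hostname -f                        # the name the daemons will advertise
    getent hosts $(hostname -f)        # should resolve to the node's LAN IP, not a loopback address
    cat /etc/hosts                     # master and slave entries should point at the real IPs

A common culprit on Debian/Ubuntu is an /etc/hosts line mapping the hostname to 
127.0.1.1, which makes a DataNode register with an address other nodes cannot 
reach.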

From: Caesar Samsi
Reply-To: "[email protected]"
Date: Wednesday, June 3, 2015 at 8:07 PM
To: "[email protected]"
Subject: ack with firstBadLink as 192.168.1.12:50010?

I’ve just built my distributed cluster but am getting the following error when 
I try to use HDFS.

I’ve traced it with telnet to 192.168.1.12 50010, and it just sits there waiting 
for a connection that never happens.

If I telnet on that host itself using localhost (127.0.0.1), the connection is 
established immediately.
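
Concretely, the two tests were:

    telnet 192.168.1.12 50010   # hangs, never connects
    telnet 127.0.0.1 50010      # on the datanode itself: connects immediately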

What could be the cause?

>> 

hduser@hadoopmaster ~/hadoop $ hdfs dfs -copyFromLocal input input
15/06/03 20:03:36 INFO hdfs.DFSClient: Exception in createBlockOutputStream
java.io.IOException: Got error, status message , ack with firstBadLink as 192.168.1.12:50010
        at org.apache.hadoop.hdfs.protocol.datatransfer.DataTransferProtoUtil.checkBlockOpStatus(DataTransferProtoUtil.java:140)
        at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.createBlockOutputStream(DFSOutputStream.java:1334)
        at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.nextBlockOutputStream(DFSOutputStream.java:1237)
        at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.run(DFSOutputStream.java:449)
15/06/03 20:03:36 INFO hdfs.DFSClient: Abandoning BP-101149352-192.168.1.10-1433386347922:blk_1073741829_1005
15/06/03 20:03:36 INFO hdfs.DFSClient: Excluding datanode DatanodeInfoWithStorage[192.168.1.12:50010,DS-1347a6fe-6bad-4df8-88cb-21378b847839,DISK]
15/06/03 20:03:36 WARN hdfs.DFSClient: Slow waitForAckedSeqno took 70947ms (threshold=30000ms)
