I could really use some help here. As you can see from the output below, the two attached datanodes are identified by a non-existent IP address. Can someone tell me how that address gets selected, or how to set it explicitly? Also, why are both datanodes shown under the same name/IP?
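If it helps, my guess is that the relevant properties are dfs.datanode.address and dfs.datanode.hostname in each datanode's hdfs-site.xml, along the lines of the sketch below (DATANODE-IP is a placeholder for whatever the correct address should be; I haven't confirmed these are the settings that drive the address shown in the report):

    <!-- guess: bind the datanode to a specific address instead of the
         default 0.0.0.0:50010 -->
    <property>
      <name>dfs.datanode.address</name>
      <value>DATANODE-IP:50010</value>
    </property>

    <!-- guess: pin the hostname the datanode reports when it registers
         with the namenode -->
    <property>
      <name>dfs.datanode.hostname</name>
      <value>hadoop-data1</value>
    </property>

If someone can confirm whether those are the right knobs, or whether the registered address actually comes from DNS/reverse-DNS resolution (dfs.datanode.dns.interface?), that would be great.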
hadoop@hadoop-master:~$ hdfs dfsadmin -report
Configured Capacity: 84482326528 (78.68 GB)
Present Capacity: 75745546240 (70.54 GB)
DFS Remaining: 75744862208 (70.54 GB)
DFS Used: 684032 (668 KB)
DFS Used%: 0.00%
Under replicated blocks: 0
Blocks with corrupt replicas: 0
Missing blocks: 0
Missing blocks (with replication factor 1): 0

-------------------------------------------------
Live datanodes (2):

Name: 192.168.51.1:50010 (192.168.51.1)
Hostname: hadoop-data1
Decommission Status : Normal
Configured Capacity: 42241163264 (39.34 GB)
DFS Used: 303104 (296 KB)
Non DFS Used: 4302479360 (4.01 GB)
DFS Remaining: 37938380800 (35.33 GB)
DFS Used%: 0.00%
DFS Remaining%: 89.81%
Configured Cache Capacity: 0 (0 B)
Cache Used: 0 (0 B)
Cache Remaining: 0 (0 B)
Cache Used%: 100.00%
Cache Remaining%: 0.00%
Xceivers: 1
Last contact: Fri Sep 25 13:25:37 UTC 2015

Name: 192.168.51.4:50010 (hadoop-master)
Hostname: hadoop-master
Decommission Status : Normal
Configured Capacity: 42241163264 (39.34 GB)
DFS Used: 380928 (372 KB)
Non DFS Used: 4434300928 (4.13 GB)
DFS Remaining: 37806481408 (35.21 GB)
DFS Used%: 0.00%
DFS Remaining%: 89.50%
Configured Cache Capacity: 0 (0 B)
Cache Used: 0 (0 B)
Cache Remaining: 0 (0 B)
Cache Used%: 100.00%
Cache Remaining%: 0.00%
Xceivers: 1
Last contact: Fri Sep 25 13:25:38 UTC 2015


On Thu, Sep 24, 2015 at 5:05 PM, Daniel Watrous <[email protected]> wrote:

> The IP address is clearly wrong, but I'm not sure how it gets set. Can
> someone tell me how to configure it to choose a valid IP address?
>
> On Thu, Sep 24, 2015 at 3:26 PM, Daniel Watrous <[email protected]> wrote:
>
>> I just noticed that both datanodes appear to have chosen that IP address
>> and bound that port for HDFS communication.
>>
>> http://screencast.com/t/OQNbrWFF
>>
>> Any idea why this would be? Is there some way to specify which
>> IP/hostname should be used for that?
>>
>> On Thu, Sep 24, 2015 at 3:11 PM, Daniel Watrous <[email protected]> wrote:
>>
>>> When I try to run a map reduce example, I get the following error:
>>>
>>> hadoop@hadoop-master:~$ hadoop jar /usr/local/hadoop/share/hadoop/mapreduce/hadoop-mapreduce-examples-2.7.1.jar pi 10 30
>>> Number of Maps  = 10
>>> Samples per Map = 30
>>> 15/09/24 20:04:28 INFO hdfs.DFSClient: Exception in createBlockOutputStream
>>> java.io.IOException: Got error, status message , ack with firstBadLink as 192.168.51.1:50010
>>>         at org.apache.hadoop.hdfs.protocol.datatransfer.DataTransferProtoUtil.checkBlockOpStatus(DataTransferProtoUtil.java:140)
>>>         at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.createBlockOutputStream(DFSOutputStream.java:1334)
>>>         at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.nextBlockOutputStream(DFSOutputStream.java:1237)
>>>         at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.run(DFSOutputStream.java:449)
>>> 15/09/24 20:04:28 INFO hdfs.DFSClient: Abandoning BP-852923283-127.0.1.1-1443119668806:blk_1073741825_1001
>>> 15/09/24 20:04:28 INFO hdfs.DFSClient: Excluding datanode DatanodeInfoWithStorage[192.168.51.1:50010,DS-45f6e06d-752e-41e8-ac25-ca88bce80d00,DISK]
>>> 15/09/24 20:04:28 WARN hdfs.DFSClient: Slow waitForAckedSeqno took 65357ms (threshold=30000ms)
>>> Wrote input for Map #0
>>>
>>> I'm not sure why it's trying to access 192.168.51.1:50010, which isn't
>>> even a valid IP address in my setup.
>>>
>>> Daniel
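P.S. One more clue: the block pool ID in the log quoted above embeds 127.0.1.1 (BP-852923283-127.0.1.1-1443119668806), which makes me suspect the namenode resolved its own hostname to the Ubuntu-default 127.0.1.1 line in /etc/hosts when it was formatted. If that's related, I'd expect every node to need a hosts file roughly like this (192.168.51.4 for hadoop-master comes from the report above; the hadoop-data1 address is a placeholder since I don't know what it should be yet):

    127.0.0.1      localhost
    # no 127.0.1.1 entry for the node's own hostname
    192.168.51.4   hadoop-master
    192.168.51.x   hadoop-data1    # placeholder

Is that a plausible explanation for the bogus addresses?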
