The IP address is clearly wrong, but I'm not sure how it gets set. Can someone tell me how to configure Hadoop so the datanodes bind to a valid IP address?
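
From what I can find so far, the listen address for that port seems to be controlled by dfs.datanode.address in hdfs-site.xml (the default is 0.0.0.0:50010, so the datanode just binds whatever interface the hostname resolves to). I'm not certain this is the right knob for my problem, but this is the per-node override I'm considering; 192.168.51.11 below is only a placeholder for whatever each datanode's real cluster IP should be:

    <!-- hdfs-site.xml on each datanode; the IP below is a placeholder for that node's real address -->
    <property>
      <name>dfs.datanode.address</name>
      <value>192.168.51.11:50010</value>
    </property>
    <property>
      <name>dfs.datanode.ipc.address</name>
      <value>192.168.51.11:50020</value>
    </property>
    <property>
      <name>dfs.datanode.http.address</name>
      <value>192.168.51.11:50075</value>
    </property>

I also notice the block pool ID in the trace below contains 127.0.1.1, so I plan to double-check with getent hosts $(hostname) whether /etc/hosts is mapping each node's hostname to a loopback address instead of its cluster IP.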
On Thu, Sep 24, 2015 at 3:26 PM, Daniel Watrous <[email protected]> wrote:

> I just noticed that both datanodes appear to have chosen that IP address
> and bound that port for HDFS communication.
>
> http://screencast.com/t/OQNbrWFF
>
> Any idea why this would be? Is there some way to specify which IP/hostname
> should be used for that?
>
> On Thu, Sep 24, 2015 at 3:11 PM, Daniel Watrous <[email protected]> wrote:
>
>> When I try to run a map reduce example, I get the following error:
>>
>> hadoop@hadoop-master:~$ hadoop jar
>> /usr/local/hadoop/share/hadoop/mapreduce/hadoop-mapreduce-examples-2.7.1.jar
>> pi 10 30
>> Number of Maps = 10
>> Samples per Map = 30
>> 15/09/24 20:04:28 INFO hdfs.DFSClient: Exception in createBlockOutputStream
>> java.io.IOException: Got error, status message , ack with firstBadLink as 192.168.51.1:50010
>>         at org.apache.hadoop.hdfs.protocol.datatransfer.DataTransferProtoUtil.checkBlockOpStatus(DataTransferProtoUtil.java:140)
>>         at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.createBlockOutputStream(DFSOutputStream.java:1334)
>>         at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.nextBlockOutputStream(DFSOutputStream.java:1237)
>>         at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.run(DFSOutputStream.java:449)
>> 15/09/24 20:04:28 INFO hdfs.DFSClient: Abandoning BP-852923283-127.0.1.1-1443119668806:blk_1073741825_1001
>> 15/09/24 20:04:28 INFO hdfs.DFSClient: Excluding datanode DatanodeInfoWithStorage[192.168.51.1:50010,DS-45f6e06d-752e-41e8-ac25-ca88bce80d00,DISK]
>> 15/09/24 20:04:28 WARN hdfs.DFSClient: Slow waitForAckedSeqno took 65357ms (threshold=30000ms)
>> Wrote input for Map #0
>>
>> I'm not sure why it's trying to access 192.168.51.1:50010, which isn't
>> even a valid IP address in my setup.
>>
>> Daniel
