Ok. I don't have a firewall, so that shouldn't be a problem. I'll look into the other things. How can I point the system to use a particular config file? Aren't those fixed to hadoop-default.xml and hadoop-site.xml?
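For what it's worth, the control scripts under bin/ read their configuration from the directory named by HADOOP_CONF_DIR (falling back to conf/ under the install root), so an alternate config can be selected without editing the defaults. A minimal sketch, with a hypothetical path:

```shell
# Hadoop's scripts look for hadoop-site.xml in $HADOOP_CONF_DIR,
# defaulting to conf/ under the install root. The path below is
# hypothetical; point it at your own directory.
export HADOOP_CONF_DIR="$HOME/cluster-conf"
echo "Using config dir: $HADOOP_CONF_DIR"
# Most bin/ scripts also accept the directory per-invocation:
#   bin/start-dfs.sh --config "$HOME/cluster-conf"
```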
On Sun, Feb 1, 2009 at 5:49 PM, jason hadoop <[email protected]> wrote:

> It is possible that your slaves are unable to contact the master due to a
> network routing, firewall or hostname resolution error.
>
> The alternative is that your namenode is either failing to start, or
> running from a different configuration file and binding to a different
> port.
>
> On Fri, Jan 30, 2009 at 2:59 PM, Amandeep Khurana <[email protected]> wrote:
>
> > Here's the log from the datanode:
> >
> > 2009-01-30 14:54:18,019 INFO org.apache.hadoop.ipc.Client: Retrying
> > connect to server: rndpc1/171.69.102.51:9000. Already tried 8 time(s).
> > 2009-01-30 14:54:19,022 INFO org.apache.hadoop.ipc.Client: Retrying
> > connect to server: rndpc1/171.69.102.51:9000. Already tried 9 time(s).
> > 2009-01-30 14:54:19,026 ERROR org.apache.hadoop.dfs.DataNode:
> > java.io.IOException: Call failed on local exception
> >         at org.apache.hadoop.ipc.Client.call(Client.java:718)
> >         at org.apache.hadoop.ipc.RPC$Invoker.invoke(RPC.java:216)
> >         at org.apache.hadoop.dfs.$Proxy4.getProtocolVersion(Unknown Source)
> >         at org.apache.hadoop.ipc.RPC.getProxy(RPC.java:319)
> >         at org.apache.hadoop.ipc.RPC.getProxy(RPC.java:306)
> >         at org.apache.hadoop.ipc.RPC.getProxy(RPC.java:343)
> >         at org.apache.hadoop.ipc.RPC.waitForProxy(RPC.java:288)
> >         at org.apache.hadoop.dfs.DataNode.startDataNode(DataNode.java:277)
> >         at org.apache.hadoop.dfs.DataNode.<init>(DataNode.java:223)
> >         at org.apache.hadoop.dfs.DataNode.makeInstance(DataNode.java:3031)
> >         at org.apache.hadoop.dfs.DataNode.instantiateDataNode(DataNode.java:2986)
> >         at org.apache.hadoop.dfs.DataNode.createDataNode(DataNode.java:2994)
> >         at org.apache.hadoop.dfs.DataNode.main(DataNode.java:3116)
> > Caused by: java.net.ConnectException: Connection refused
> >         at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method)
> >         at sun.nio.ch.SocketChannelImpl.finishConnect(Unknown Source)
> >         at sun.nio.ch.SocketAdaptor.connect(Unknown Source)
> >         at org.apache.hadoop.ipc.Client$Connection.setupIOstreams(Client.java:300)
> >         at org.apache.hadoop.ipc.Client$Connection.access$1700(Client.java:177)
> >         at org.apache.hadoop.ipc.Client.getConnection(Client.java:789)
> >         at org.apache.hadoop.ipc.Client.call(Client.java:704)
> >         ... 12 more
> >
> > What do I need to do for this?
> >
> > Amandeep
> >
> > Amandeep Khurana
> > Computer Science Graduate Student
> > University of California, Santa Cruz
> >
> > On Fri, Jan 30, 2009 at 2:49 PM, Amandeep Khurana <[email protected]> wrote:
> >
> > > Hi,
> > >
> > > I am a new user and was setting up the HDFS on 3 nodes as of now. I
> > > could get them to run individual pseudo distributed setups but am
> > > unable to get the cluster going together. The site localhost:50070
> > > shows me that there are no datanodes.
> > >
> > > I kept the same hadoop-site.xml as the pseudodistributed setup on the
> > > master node and added the slaves to the list of slaves in the conf
> > > directory. Thereafter, I ran the start-dfs.sh and start-mapred.sh
> > > scripts.
> > >
> > > Am I missing something out?
> > >
> > > Amandeep
> > >
> > > Amandeep Khurana
> > > Computer Science Graduate Student
> > > University of California, Santa Cruz
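Since the "Connection refused" retries above suggest the datanodes may be aimed at the wrong namenode address or port, a quick way to see which address a node's config actually points at is to pull fs.default.name out of its hadoop-site.xml. A sketch using an illustrative config file (on a real node you would read the file in your conf directory instead):

```shell
# Write an illustrative hadoop-site.xml to a temp file; replace this
# with the real file from your conf directory when debugging.
conf=$(mktemp)
cat > "$conf" <<'EOF'
<configuration>
  <property>
    <name>fs.default.name</name>
    <value>hdfs://rndpc1:9000</value>
  </property>
</configuration>
EOF
# Extract the value that follows the fs.default.name property name.
addr=$(grep -A1 'fs.default.name' "$conf" | sed -n 's:.*<value>\(.*\)</value>.*:\1:p')
echo "fs.default.name = $addr"
rm -f "$conf"
```

Every datanode should show the same master host and port here, and that port should be the one the namenode is actually listening on.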
