Hi all, I'm a student and I have been trying to set up a Hadoop cluster for
a while, but have been unsuccessful so far.

I'm trying to set up a 4-node cluster:
1 - namenode
1 - jobtracker
2 - datanode / tasktracker

Hadoop version - 0.18.3

*config in hadoop-site.xml* (which I have replicated in the conf
directory of all 4 nodes)

*******
 <configuration>
  <property>
   <name>mapred.job.tracker</name>
   <value>swin07.xx.xx.edu:9001</value>
  </property>
  <property>
   <name>fs.default.name</name>
   <value>hdfs://swin06.xx.xx.edu:9000</value>
  </property>
  <property>
   <name>dfs.data.dir</name>
   <value>/home/kesivakumar/hadoop/dfs/data</value>
   <final>true</final>
  </property>
  <property>
   <name>dfs.name.dir</name>
   <value>/home/kesivakumar/hadoop/dfs/name</value>
   <final>true</final>
  </property>
  <property>
   <name>hadoop.tmp.dir</name>
   <value>/tmp/hadoop</value>
   <final>true</final>
  </property>
  <property>
   <name>mapred.system.dir</name>
   <value>/hadoop/mapred/system</value>
   <final>true</final>
  </property>
  <property>
   <name>dfs.replication</name>
   <value>1</value>
  </property>
 </configuration>

*******
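One failure mode I wanted to rule out is stray whitespace inside the
<name>/<value> elements, since I'm not sure whether 0.18.3 trims it before
looking properties up. A minimal sketch with Python's standard library that
flags such whitespace (the embedded sample is just an illustration of the
failure mode, with a trailing space planted in the property name):

```python
import xml.etree.ElementTree as ET

# Illustrative hadoop-site.xml fragment with a trailing space
# planted inside <name> to show what the checker catches.
CONF = """<configuration>
 <property>
  <name>mapred.job.tracker </name>
  <value>swin07.xx.xx.edu:9001</value>
 </property>
</configuration>"""

def stray_whitespace(xml_text):
    """Return (tag, text) pairs whose text has leading/trailing whitespace."""
    root = ET.fromstring(xml_text)
    bad = []
    for prop in root.findall("property"):
        for tag in ("name", "value"):
            elem = prop.find(tag)
            if elem is not None and elem.text and elem.text != elem.text.strip():
                bad.append((tag, elem.text))
    return bad

print(stray_whitespace(CONF))  # flags the trailing space in the name
```

Running it against the real file (via ET.parse) would flag any property that
Hadoop might silently fail to match.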


The problem is that both of my datanodes are dead.
The *slave files are configured properly*, and to my surprise the
tasktrackers are running
(checked through swin07:50030, which showed 2 tasktrackers running)
(swin06:50070 showed the namenode is active but 0 datanodes active)
So when I try copying the conf dir into the filesystem using the -put
command, it throws errors. Below is the last part of the error output:



********

09/06/25 00:36:30 WARN dfs.DFSClient: NotReplicatedYetException sleeping /user/kesivakumar/input/hadoop-env.sh retries left 1
09/06/25 00:36:34 WARN dfs.DFSClient: DataStreamer Exception: org.apache.hadoop.ipc.RemoteException: java.io.IOException: File /user/kesivakumar/input/hadoop-env.sh could only be replicated to 0 nodes, instead of 1
        at org.apache.hadoop.dfs.FSNamesystem.getAdditionalBlock(FSNamesystem.java:1123)
        at org.apache.hadoop.dfs.NameNode.addBlock(NameNode.java:330)
        at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
        at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
        at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
        at java.lang.reflect.Method.invoke(Method.java:597)
        at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:481)
        at org.apache.hadoop.ipc.Server$Handler.run(Server.java:890)

        at org.apache.hadoop.ipc.Client.call(Client.java:716)
        at org.apache.hadoop.ipc.RPC$Invoker.invoke(RPC.java:216)
        at org.apache.hadoop.dfs.$Proxy0.addBlock(Unknown Source)
        at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
        at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
        at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
        at java.lang.reflect.Method.invoke(Method.java:597)
        at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:82)
        at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:59)
        at org.apache.hadoop.dfs.$Proxy0.addBlock(Unknown Source)
        at org.apache.hadoop.dfs.DFSClient$DFSOutputStream.locateFollowingBlock(DFSClient.java:2450)
        at org.apache.hadoop.dfs.DFSClient$DFSOutputStream.nextBlockOutputStream(DFSClient.java:2333)
        at org.apache.hadoop.dfs.DFSClient$DFSOutputStream.access$1800(DFSClient.java:1745)
        at org.apache.hadoop.dfs.DFSClient$DFSOutputStream$DataStreamer.run(DFSClient.java:1922)

09/06/25 00:36:34 WARN dfs.DFSClient: Error Recovery for block null bad datanode[0]
put: Could not get block locations. Aborting...
Exception closing file /user/kesivakumar/input/hadoop-env.sh
java.io.IOException: Could not get block locations. Aborting...
        at org.apache.hadoop.dfs.DFSClient$DFSOutputStream.processDatanodeError(DFSClient.java:2153)
        at org.apache.hadoop.dfs.DFSClient$DFSOutputStream.access$1400(DFSClient.java:1745)
        at org.apache.hadoop.dfs.DFSClient$DFSOutputStream$DataStreamer.run(DFSClient.java:1899)

*********


When I tried bin/hadoop dfsadmin -report, it says that 0 datanodes are
available.

Also, while formatting the namenode using bin/hadoop namenode -format,
the format is done at swin06/*127.0.1.1* ---> why is it not getting done
at 127.0.0.1?
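My guess is that the 127.0.1.1 comes from /etc/hosts (Debian/Ubuntu map the
machine's own hostname to 127.0.1.1 by default), so the namenode resolves
its own name to a loopback address that the datanodes cannot reach. A
minimal sketch of how I checked what a name resolves to (the swin06
hostname is from my cluster and won't resolve elsewhere):

```python
import socket

# The namenode binds to whatever its own hostname resolves to,
# so a hostname -> 127.0.1.1 mapping in /etc/hosts explains the
# "swin06/127.0.1.1" banner printed by namenode -format.
def resolve(host):
    """Return the IPv4 address a hostname resolves to, or None."""
    try:
        return socket.gethostbyname(host)
    except socket.gaierror:
        return None

print("localhost        ->", resolve("localhost"))
print("swin06.xx.xx.edu ->", resolve("swin06.xx.xx.edu"))
```

On the namenode box itself, resolving its own hostname should ideally print
the real LAN address, not 127.0.1.1 or 127.0.0.1.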

How can I rectify these errors?

Any help would be greatly appreciated.

Thank you.
