Harsh J <harsh@...> writes:

> 
> Hello RX,
> 
> Could you paste your DFS configuration and the DN end-to-end log into
> a mail/pastebin-link?
> 
> On Fri, May 27, 2011 at 5:31 AM, Xu, Richard <richard.xu@...> wrote:
> > Hi Folks,
> >
> > We are trying to get HBase and Hadoop running on a cluster, taking 2 Solaris servers for now.
> >
> > Because of an incompatibility issue between HBase and Hadoop, we have to stick with the hadoop
> 0.20.2-append release.
> >
> > It is very straightforward to get hadoop-0.20.203 running, but we have been stuck for several days with
> hadoop-0.20.2, even with the official release, not just the append version.
> >
> > 1. When we try to run start-mapred.sh (hadoop-daemon.sh --config $HADOOP_CONF_DIR start jobtracker),
> the following errors appear in the namenode and jobtracker logs:
> >
> > 2011-05-26 12:30:29,169 WARN org.apache.hadoop.hdfs.server.namenode.FSNamesystem: Not able to place enough replicas, still in need of 1
> > 2011-05-26 12:30:29,175 INFO org.apache.hadoop.ipc.Server: IPC Server handler 4 on 9000, call addBlock(/tmp/hadoop-cfadm/mapred/system/jobtracker.info, DFSClient_2146408809) from 169.193.181.212:55334: error: java.io.IOException: File /tmp/hadoop-cfadm/mapred/system/jobtracker.info could only be replicated to 0 nodes, instead of 1
> > java.io.IOException: File /tmp/hadoop-cfadm/mapred/system/jobtracker.info could only be replicated to 0 nodes, instead of 1
> >        at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getAdditionalBlock(FSNamesystem.java:1271)
> >        at org.apache.hadoop.hdfs.server.namenode.NameNode.addBlock(NameNode.java:422)
> >        at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
> >        at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
> >        at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
> >        at java.lang.reflect.Method.invoke(Method.java:597)
> >        at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:508)
> >        at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:959)
> >        at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:955)
> >        at java.security.AccessController.doPrivileged(Native Method)
> >        at javax.security.auth.Subject.doAs(Subject.java:396)
> >        at org.apache.hadoop.ipc.Server$Handler.run(Server.java:953)
> >
> >
> > 2. Also, Configured Capacity is 0, and we cannot put any file into HDFS.
> >
> > 3. On the datanode server there are no errors in the logs, but the tasktracker log has the following suspicious entries:
> > 2011-05-25 23:36:10,839 INFO org.apache.hadoop.ipc.Server: IPC Server Responder: starting
> > 2011-05-25 23:36:10,839 INFO org.apache.hadoop.ipc.Server: IPC Server listener on 41904: starting
> > 2011-05-25 23:36:10,852 INFO org.apache.hadoop.ipc.Server: IPC Server handler 0 on 41904: starting
> > 2011-05-25 23:36:10,853 INFO org.apache.hadoop.ipc.Server: IPC Server handler 1 on 41904: starting
> > 2011-05-25 23:36:10,853 INFO org.apache.hadoop.ipc.Server: IPC Server handler 2 on 41904: starting
> > 2011-05-25 23:36:10,853 INFO org.apache.hadoop.ipc.Server: IPC Server handler 3 on 41904: starting
> >                                        .....
> > 2011-05-25 23:36:10,855 INFO org.apache.hadoop.ipc.Server: IPC Server handler 63 on 41904: starting
> > 2011-05-25 23:36:10,950 INFO org.apache.hadoop.mapred.TaskTracker: TaskTracker up at: localhost/127.0.0.1:41904
> > 2011-05-25 23:36:10,950 INFO org.apache.hadoop.mapred.TaskTracker: Starting tracker tracker_loanps3d:localhost/127.0.0.1:41904
> >
> >
> > I have tried all the suggestions found so far, including
> >     1) removing the hadoop-name and hadoop-data folders and reformatting the namenode;
> >     2) cleaning up all temp files/folders under /tmp;
> >
> > But nothing works.
> >
> > Your help is greatly appreciated.
> >
> > Thanks,
> >
> > RX
> >
> 
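[Editor's note: in the quoted thread above, "could only be replicated to 0 nodes" together with "Configured Capacity is 0" usually means no DataNode has registered with the NameNode, often because the DataNode host cannot reach the NameNode's RPC port (9000 in the log above). A minimal TCP reachability probe, plain Python with no Hadoop dependency, can rule that out; the host name below is a placeholder, not taken from the original setup:]

```python
import socket

def can_reach(host, port, timeout=2.0):
    """Return True if a TCP connection to (host, port) succeeds."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

# Run this from the DataNode host; "namenode-host" is a placeholder
# for whatever fs.default.name points at.
# print(can_reach("namenode-host", 9000))
```

If the probe fails from the DataNode host but succeeds locally on the NameNode, the NameNode is likely bound to localhost only, or a firewall is in the way; `hadoop dfsadmin -report` on the NameNode will also show whether any DataNodes are live.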

Hi,


I am able to start the namenode and datanode, but while starting the
jobtracker, it throws an error like:

FATAL mapred.JobTracker: java.net.BindException: Problem binding to 
localhost/127.0.0.1:5102 : Address already in use
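[Editor's note: a BindException with "Address already in use" means some other process, often a stale JobTracker that never shut down, is already listening on port 5102. The condition is easy to reproduce and recognize with a few lines of plain Python, no Hadoop needed:]

```python
import errno
import socket

# Occupy a port, then try to bind it a second time -- the second bind
# fails the same way the JobTracker's did.
first = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
first.bind(("127.0.0.1", 0))      # port 0: let the OS pick a free port
first.listen(1)
port = first.getsockname()[1]

second = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
try:
    second.bind(("127.0.0.1", port))
    in_use = False
except OSError as exc:
    in_use = (exc.errno == errno.EADDRINUSE)
finally:
    second.close()
first.close()

print("address already in use:", in_use)
```

On the real machine, `netstat -an | grep 5102` will show the existing listener; either kill the stale process or point mapred.job.tracker at a different port.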

Kindly help me with this as soon as possible.


regards,
Srinivas



