Hi Vinod,

This sounds to me like a DNS issue. I have seen the same thing on development
environments, and a couple of things that gave me big headaches were:

- Inconsistency between DNS and the machine hostname, when localhost is not used
- On some systems /etc/hosts defines localhost for both IPv4 and IPv6 (macOS is
one example); there you need to comment out the IPv6 entry.
- When I don't want to use localhost at all, I comment it out entirely in
/etc/hosts
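To illustrate the IPv6 point, here is a minimal sketch of commenting out the IPv6 localhost entry, done on a throwaway copy of the file so nothing live is touched (the file contents are just the typical macOS defaults):

```shell
# Work on a disposable copy with the typical default entries:
printf '127.0.0.1\tlocalhost\n::1\tlocalhost\n' > /tmp/hosts.example

# Comment out the IPv6 localhost line (::1):
sed -i.bak 's/^::1/#::1/' /tmp/hosts.example

# Only the IPv4 entry should remain active:
grep -v '^#' /tmp/hosts.example
```

On the real system you would make the equivalent edit to /etc/hosts itself (as root), then restart the Hadoop daemons so they re-resolve the hostname.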

So, the approach I take in this case is, before doing anything with HBase, to
stop everything, clean the logs, format the namenode, and start Hadoop only.
Monitor the logs (which won't be a problem here, since it's a standalone
installation) to see whether HDFS comes up OK. You can also try copying a file
to and from HDFS to be sure HDFS is working fine. Then you can start MapReduce
(the JobTracker and TaskTrackers) and finally HBase.
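In commands, that sequence might look roughly like this (the script names are the Hadoop 0.20-era ones under bin/; $HADOOP_HOME, $HBASE_HOME, and the test file are placeholders for your installation):

```
cd $HADOOP_HOME
bin/stop-all.sh                     # stop any running HDFS/MapReduce daemons
rm -rf logs/*                       # clean the logs
bin/hadoop namenode -format         # format the namenode (destroys HDFS data!)
bin/start-dfs.sh                    # start HDFS only, then watch logs/

bin/hadoop fs -put conf/core-site.xml /test.xml   # copy a file into HDFS...
bin/hadoop fs -cat /test.xml                      # ...and read it back

bin/start-mapred.sh                 # then the JobTracker and TaskTrackers
$HBASE_HOME/bin/start-hbase.sh      # and finally HBase
```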

I hope this helps :)

Regards,
Dejo

On Thu, Sep 29, 2011 at 8:06 PM, Vinod Gupta Tankala
<[email protected]> wrote:

> Thanks Dejo for pointing that out. I realized that earlier and fixed it,
> but I still hit the same problem.
>
> In my case, I only have a single host for now. But I am still trying to
> do a distributed setup by listing the machine itself as a slave in the
> config and not using localhost anywhere. Does this even work? If not, I
> can try spending more time on a pseudo-distributed setup for now.
>
> thanks
>
>
> On Thu, Sep 29, 2011 at 4:48 AM, Dejan Menges <[email protected]> wrote:
>
> > In core-site.xml, first of all, you are missing the port at the end of the HDFS URI:
> >
> >  <property>
> >   <name>fs.default.name</name>
> >   <value>hdfs://ec2-184-73-22-146.compute-1.amazonaws.com/</value>
> >  </property>
> >
> > Regards,
> > Dejo
> >
> > On Wed, Sep 28, 2011 at 6:21 PM, Vinod Gupta Tankala
> > <[email protected]> wrote:
> >
> > > Hi,
> > > I am trying to setup a test system to host a distributed hbase
> > > installation.
> > > No matter what I do, I get the below errors.
> > >
> > > 2011-09-28 22:17:26,288 WARN org.apache.hadoop.hdfs.DFSClient: Error Recovery for block null bad datanode[0] nodes == null
> > > 2011-09-28 22:17:26,288 WARN org.apache.hadoop.hdfs.DFSClient: Could not get block locations. Source file "/tmp/mapred/system/jobtracker.info" - Aborting...
> > > 2011-09-28 22:17:26,288 WARN org.apache.hadoop.mapred.JobTracker: Writing to file hdfs://ec2-184-73-22-146.compute-1.amazonaws.com/tmp/mapred/system/jobtracker.info failed!
> > > 2011-09-28 22:17:26,288 WARN org.apache.hadoop.mapred.JobTracker: FileSystem is not ready yet!
> > > 2011-09-28 22:17:26,292 WARN org.apache.hadoop.mapred.JobTracker: Failed to initialize recovery manager.
> > > org.apache.hadoop.ipc.RemoteException: java.io.IOException: File /tmp/mapred/system/jobtracker.info could only be replicated to 0 nodes, instead of 1
> > >        at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getAdditionalBlock(FSNamesystem.java:1417)
> > > ....
> > >
> > > This is how I set up my config -
> > > core-site.xml -
> > > <configuration>
> > >
> > >  <property>
> > >    <name>fs.default.name</name>
> > >    <value>hdfs://ec2-184-73-22-146.compute-1.amazonaws.com/</value>
> > >  </property>
> > >
> > > </configuration>
> > >
> > > hdfs-site.xml -
> > > <configuration>
> > >
> > >  <property>
> > >    <name>dfs.replication</name>
> > >    <value>1</value>
> > >  </property>
> > >
> > >  <property>
> > >    <name>dfs.name.dir</name>
> > >    <value>/tmp/hbase</value>
> > >  </property>
> > >
> > >  <property>
> > >    <name>dfs.data.dir</name>
> > >    <value>/tmp/hbase</value>
> > >  </property>
> > >
> > > </configuration>
> > >
> > >
> > > mapred-site.xml -
> > > <configuration>
> > >
> > >  <property>
> > >    <name>mapred.job.tracker</name>
> > >    <value>ec2-184-73-22-146.compute-1.amazonaws.com:9001</value>
> > >  </property>
> > >
> > >  <property>
> > >    <name>mapred.local.dir</name>
> > >    <value>/tmp/mapred_tmp</value>
> > >  </property>
> > >
> > >  <property>
> > >    <name>mapred.map.tasks</name>
> > >    <value>10</value>
> > >  </property>
> > >
> > >  <property>
> > >    <name>mapred.reduce.tasks</name>
> > >    <value>2</value>
> > >  </property>
> > >
> > >  <property>
> > >    <name>mapred.system.dir</name>
> > >    <value>/tmp/mapred/system/</value>
> > >  </property>
> > >
> > >
> > > </configuration>
> > >
> > > I know that I am missing something really basic but am not sure what
> > > it is. The documentation says mapred.system.dir should be globally
> > > accessible. How do I achieve that?
> > >
> > > thanks
> > > vinod
> > >
> >
>
