Thanks Marcos. I applied the change you mentioned, but it still gave me an error (my current hbase-site.xml is pasted at the bottom of this mail). I then stopped everything, restarted Hadoop, and tried to run a simple MapReduce job with the provided example jar (./bin/hadoop jar hadoop-examples-1.0.3.jar pi 10 100); the restart sequence I used is sketched below.
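For completeness, this is roughly what I ran (the paths are from my setup, so treat this as a sketch rather than exact commands):

$ ./bin/stop-hbase.sh      # in the HBase directory: stops HMaster, HRegionServer, HQuorumPeer
$ ./bin/stop-all.sh        # in the Hadoop directory: stops NameNode, DataNode, JobTracker, TaskTracker
$ ./bin/start-all.sh       # brings HDFS and MapReduce back up
$ jps                      # verified NameNode, DataNode, JobTracker and TaskTracker were running
$ ./bin/hadoop jar hadoop-examples-1.0.3.jar pi 10 100   # the example job mentioned above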
The pi job failed with:

12/09/14 15:59:50 INFO mapred.JobClient: Task Id : attempt_201209141539_0001_m_000011_0, Status : FAILED
Error initializing attempt_201209141539_0001_m_000011_0:
java.io.IOException: BlockReader: error in packet header (chunkOffset : 142336, dataLen : 3538944, seqno : 3350829872548206857 (last: 0))

I think there is something wrong with my Hadoop setup itself. I will do more research and see if I can find out why; the health checks I plan to start with are at the very bottom of this mail.

thanks,
Jason

On Thu, Sep 13, 2012 at 7:56 PM, Marcos Ortiz <[email protected]> wrote:
>
> Regards, Jason.
> Answers inline.
>
> On 09/13/2012 06:42 PM, Jason Huang wrote:
>
> Hello,
>
> I am trying to set up HBase in pseudo-distributed mode on my MacBook.
> I was able to install Hadoop and HBase and start the daemons.
>
> $ jps
> 5417 TaskTracker
> 5083 NameNode
> 5761 HRegionServer
> 5658 HMaster
> 6015 Jps
> 5613 HQuorumPeer
> 5171 DataNode
> 5327 JobTracker
> 5262 SecondaryNameNode
>
> However, when I tried ./hbase shell I got the following error:
> Trace/BPT trap: 5
>
> Looking at the log from the master server I found:
>
> 2012-09-13 18:33:46,842 DEBUG org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation:
> Looked up root region location,
> connection=org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation@13d21d6;
> serverName=192.168.1.124,60020,1347575067207
> 2012-09-13 18:34:18,981 DEBUG org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation:
> Looked up root region location,
> connection=org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation@13d21d6;
> serverName=192.168.1.124,60020,1347575067207
> 2012-09-13 18:34:18,982 DEBUG org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation:
> locateRegionInMeta parentTable=-ROOT-,
> metaLocation={region=-ROOT-,,0.70236052, hostname=192.168.1.124, port=60020},
> attempt=14 of 100 failed; retrying after sleep of 32044
> because: HRegionInfo was null or empty in -ROOT-,
> row=keyvalues={.META.,,1/info:server/1347575458668/Put/vlen=19/ts=0,
> .META.,,1/info:serverstartcode/1347575458668/Put/vlen=8/ts=0}
>
> I don't quite understand what this error is or how to fix it. Any suggestions? Thanks!
>
> Here are my config files:
>
> hbase-site.xml
> <configuration>
>   <property>
>     <name>hbase.rootdir</name>
>     <value>hdfs://localhost:9000/hbase</value>
>   </property>
>   <property>
>     <name>hbase.zookeeper.quorum</name>
>     <value>localhost</value>
>   </property>
>   <property>
>     <name>hbase.cluster.distributed</name>
>     <value>true</value>
>
> If you want to use HBase in pseudo-distributed mode, you cannot put this
> property here, because the HMaster thinks that the cluster is in fully
> distributed mode and tries to find the region servers; this error comes up
> because in pseudo-distributed mode you don't have to include that property.
>
> So, remove the hbase.cluster.distributed property and restart all daemons.
>
> Another thing: for pseudo-distributed mode you don't need a running
> ZooKeeper cluster; you only need that for a fully distributed cluster.
>   </property>
>   <property>
>     <name>dfs.replication</name>
>     <value>1</value>
>   </property>
>   <property>
>     <name>hbase.master</name>
>     <value>localhost:60000</value>
>   </property>
>   <property>
>     <name>dfs.support.append</name>
>     <value>true</value>
>   </property>
> </configuration>
>
> hdfs-site.xml
> <configuration>
>   <property>
>     <name>fs.default.name</name>
>     <value>localhost:9000</value>
>   </property>
>   <property>
>     <name>dfs.replication</name>
>     <value>1</value>
>   </property>
>   <property>
>     <name>dfs.namenode.name.dir</name>
>     <value>/Users/jasonhuang/hdfs/name</value>
>   </property>
>   <property>
>     <name>dfs.datanode.data.dir</name>
>     <value>/Users/jasonhuang/hdfs/data</value>
>   </property>
>   <property>
>     <name>dfs.datanode.max.xcievers</name>
>     <value>4096</value>
>   </property>
> </configuration>
>
> mapred-site.xml
> <configuration>
>   <property>
>     <name>mapred.job.tracker</name>
>     <value>localhost:9001</value>
>   </property>
>   <property>
>     <name>mapred.child.java.opts</name>
>     <value>-Xmx512m</value>
>   </property>
>   <property>
>     <name>mapred.job.tracker</name>
>     <value>hdfs://localhost:54311</value>
>   </property>
> </configuration>
>
> --
> Marcos Luis Ortíz Valmaseda
> Data Engineer && Sr. System Administrator at UCI
> about.me/marcosortiz
> My Blog <http://marcosluis2186.posterous.com>
> Tumblr's blog <http://marcosortiz.tumblr.com/>
> @marcosluis2186 <http://twitter.com/marcosluis2186>
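P.S. For reference, this is roughly what my hbase-site.xml looks like now, if I understood the change correctly: the quoted file from above with only the hbase.cluster.distributed property removed, everything else unchanged.

<configuration>
  <property>
    <name>hbase.rootdir</name>
    <value>hdfs://localhost:9000/hbase</value>
  </property>
  <property>
    <name>hbase.zookeeper.quorum</name>
    <value>localhost</value>
  </property>
  <property>
    <name>dfs.replication</name>
    <value>1</value>
  </property>
  <property>
    <name>hbase.master</name>
    <value>localhost:60000</value>
  </property>
  <property>
    <name>dfs.support.append</name>
    <value>true</value>
  </property>
</configuration>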

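P.P.S. These are the HDFS health checks I plan to run before digging further, in case the BlockReader error points to a corrupt or missing block (again just a sketch, run from the Hadoop directory on my machine):

$ ./bin/hadoop dfsadmin -report          # confirm the DataNode is up and reporting capacity
$ ./bin/hadoop fsck / -files -blocks     # look for corrupt or under-replicated blocks
$ tail -100 logs/hadoop-*-datanode-*.log # check the DataNode log for checksum or xceiver errors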