Is HADOOP_CLASSPATH=${HBASE_CONF_DIR} pointing to the right location
on every machine in the cluster? While the job is running, you can log
in to one slave machine, run a "ps aux | grep java", and check whether
the Child tasks have the correct classpath.
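
For example, here's roughly what to look for (a sketch only; the exact
paths are hypothetical and depend on where your HBase conf directory
lives):

  $ ps aux | grep java | grep Child
  ... java -classpath /etc/hadoop/conf:/etc/hbase/conf:... \
      org.apache.hadoop.mapred.Child ...

If the directory holding hbase-site.xml (e.g. /etc/hbase/conf) is
missing from the child's -classpath, the task JVMs can't see your
quorum settings, which would be consistent with the zoo.cfg error.
It's also worth a quick "echo ${HBASE_CONF_DIR}" on each node to make
sure the variable actually expands to something; if it is unset, the
export below makes HADOOP_CLASSPATH empty.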

J-D

On Wed, Jul 21, 2010 at 3:03 PM, HAN LIU <[email protected]> wrote:
> Hi Guys,
>
> I have been fighting with this problem for a while now. Every time I try to
> run a MapReduce job I get the 'cannot find quorum server from zoo.cfg' error.
> It would be great if you could suggest a way out of it.
> Below is my setup:
>
> I am running HBase with 2 region servers, so in total there are three
> machines: one for the master and two for region servers. I launch my
> MapReduce job from a 4th machine. The job grabs data from somewhere in HDFS
> and inserts it into an HTable created on the 3 HBase machines. I checked
> some resources, and it seems that I need hbase-site.xml on my client's
> CLASSPATH, so I added 'export HADOOP_CLASSPATH=${HBASE_CONF_DIR}' to
> hadoop-env.sh, but it didn't seem to work. I also tried some other ways to
> add the classpath but have had no luck so far. In the end I had to hardcode
> the configuration into my Java file to make it work, which is a bad habit
> and makes my code much harder to maintain.
>
> I believe this problem is an easy fix but I am just stuck somewhere. Any 
> quick advice would be extremely helpful.
>
> Thanks in advance,
>
> Han
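
The hardcoding workaround Han describes above usually looks something
like the minimal sketch below. The quorum host, client port, and class
name are placeholders, and it assumes the HBaseConfiguration API and
the standard hbase.zookeeper.* keys, so adjust for your own setup:

  import org.apache.hadoop.conf.Configuration;
  import org.apache.hadoop.hbase.HBaseConfiguration;
  import org.apache.hadoop.mapreduce.Job;

  public class HTableLoader {
    public static void main(String[] args) throws Exception {
      // Bake the ZooKeeper quorum settings into the job configuration
      // instead of relying on hbase-site.xml being on the classpath.
      // "zk-host" and "2181" are placeholders for the real quorum.
      Configuration conf = HBaseConfiguration.create();
      conf.set("hbase.zookeeper.quorum", "zk-host");
      conf.set("hbase.zookeeper.property.clientPort", "2181");
      Job job = new Job(conf, "htable-loader");
      // ... set mapper, input/output formats, etc., then:
      System.exit(job.waitForCompletion(true) ? 0 : 1);
    }
  }

The cleaner fix remains the one in the reply above: get hbase-site.xml
onto the client and task classpaths so nothing has to be hardcoded.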
