What Anil says. Sounds like your job is launched with default configs --
which are for local mode. You need to point it at your distributed cluster
install.
For MapReduce jobs, HADOOP_CLASSPATH needs to be set appropriately.
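A minimal sketch of what that setup usually looks like before submitting the job. The config paths and the jar/class names here are assumptions for illustration, not your actual ones; `hbase mapredcp` and `hdfs getconf` are standard HBase/Hadoop CLI commands:

```shell
# Sketch, assuming a YARN cluster and a local HBase client install.
# Put the cluster config, the HBase config, and the HBase jars on the
# MapReduce classpath so the job does not fall back to local mode.
export HADOOP_CONF_DIR=/etc/hadoop/conf
export HADOOP_CLASSPATH="/etc/hbase/conf:$(hbase mapredcp)"

# Sanity check: this should print "yarn" on a distributed setup.
hdfs getconf -confKey mapreduce.framework.name

# Submit the job (jar and driver class names are hypothetical).
hadoop jar phoenix-mr-load.jar com.example.PhoenixBulkLoad
```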
On Thursday, May 26, 2016, anil gupta wrote:
>
Hi,
It seems like your classpath is not set up correctly. /etc/hadoop/conf and
/etc/hbase/conf need to be on the MapReduce classpath. Are you able to run
the HBase row counter job on the distributed cluster? What version of Hadoop
are you using? Did you use Ambari or Cloudera Manager to install the
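The row-counter check Anil suggests is a useful smoke test, since RowCounter ships with HBase and exercises the same MapReduce classpath. The table name below is a placeholder:

```shell
# RowCounter is bundled with HBase; replace MY_TABLE with a real table name.
# If this also runs via the LocalJobRunner, the problem is the client-side
# configuration, not your own job's code.
hbase org.apache.hadoop.hbase.mapreduce.RowCounter MY_TABLE
```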
Hello everybody,
For a few days I have been developing MapReduce code to insert values into
HBase via Phoenix. But the code runs only in local mode and overloads the
machine. Whatever changes I make, I see that the mapred.LocalJobRunner class
is always used.
Do you have an idea of the problem?
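One quick way to diagnose the LocalJobRunner symptom described above is to check which execution framework the client configuration actually resolves, since an unset or local value is exactly what triggers it:

```shell
# "local" is the default when no mapred-site.xml is found on the classpath,
# and it is what makes Hadoop use the mapred.LocalJobRunner.
# On a correctly configured YARN cluster client this should print "yarn".
hdfs getconf -confKey mapreduce.framework.name
```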