Vamshi,

One easy hack is to add:

    config.set("mapred.job.tracker", "jthost:port");

(Or better yet, always use the Tool interface to write your Hadoop jobs;
then you can simply pass "-jt <host:port>" on the command line whenever you
want the job to run against a cluster.)
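For reference, a minimal sketch of such a Tool-based driver (assuming the Hadoop 0.20/1.x mapreduce API; the class name MyHsetDriver and the job wiring are placeholders, not your actual code). ToolRunner feeds the parsed -jt/-fs/-conf/-D options into the Configuration returned by getConf():

```java
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.conf.Configured;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.util.Tool;
import org.apache.hadoop.util.ToolRunner;

public class MyHsetDriver extends Configured implements Tool {

    @Override
    public int run(String[] args) throws Exception {
        // getConf() already reflects any -jt/-D flags parsed by ToolRunner,
        // so the same jar runs locally or against a cluster unchanged.
        Job job = new Job(getConf(), "SET-Insertion");
        job.setJarByClass(MyHsetDriver.class);
        // ... set mapper/reducer classes, input and output paths here ...
        return job.waitForCompletion(true) ? 0 : 1;
    }

    public static void main(String[] args) throws Exception {
        System.exit(ToolRunner.run(new Configuration(), new MyHsetDriver(), args));
    }
}
```

You would then submit with something like: hadoop jar myhset.jar MyHsetDriver -jt jthost:9001 <other args>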

On 13-Dec-2011, at 8:43 AM, Vamshi Krishna wrote:

> What should I set in the job's classpath? Where and how should I set the
> classpath for the job? My requirement is to run the MR jobs on the cluster
> of nodes and NOT via LocalJobRunner when I start the program from Eclipse.
> Please help me.
> My snippet of code for the job settings is below; are there any more
> settings I need to add here?
> 
> public static void main(String[] args) throws IOException,
>         InterruptedException, ClassNotFoundException {
> 
>     Configuration config = HBaseConfiguration.create();
>     Job job = new Job(config, "SET-Insertion");
>     job.setJarByClass(MyHset.class);
>     job.setMapperClass(setInsertionMapper.class);
> 
>     ...
>     ...
> 
> On Mon, Dec 12, 2011 at 11:35 PM, Jean-Daniel Cryans 
> <[email protected]>wrote:
> 
>> That setting also needs to be in your job's classpath; it won't guess it.
>> 
>> J-D
>> 
>> On Thu, Dec 8, 2011 at 10:14 PM, Vamshi Krishna <[email protected]>
>> wrote:
>>> Hi Harsh,
>>> Yes, no jobs are seen on that jobtracker page: under RUNNING JOBS it is
>>> none, under FINISHED JOBS it is none, and under FAILED JOBS it is none.
>>> It is just as if no job is running. In Eclipse, while the mapreduce
>>> program was running, I could see "LocalJobRunner", so, as you said,
>>> Eclipse may merely be launching the program via a LocalJobRunner.
>>> I ran it like this:
>>> 
>>> 1) Right-click on my main Java file -> Run As -> Java Application. So it
>>> happened as I mentioned.
>>> 
>>> So I tried even doing this:
>>> 
>>> 2) Right-click on my main Java file -> Run As -> Run on Hadoop. Now
>>> nothing is happening; I mean to say no job is created, and no process
>>> seems to be started. I checked even the jobtracker and tasktracker
>>> pages; there also I could see no jobs are running, all are none.
>>> 
>>> But actually if I look at my mapred-site.xml file in the conf directory
>>> of Hadoop, it is like this:
>>> 
>>> <name>mapred.job.tracker</name>
>>> <value>hadoop-namenode:9001</value>
>>> 
>>> This hadoop-namenode's IP address is 10.0.1.54, and I am running my
>>> mapreduce job from an Eclipse residing on the same machine. So
>>> mapred.job.tracker is set to one machine and port, and then the job
>>> should be submitted as a distributed job, right? But why is this not
>>> happening? On all machines, all daemons are running.
>>> What should I do to run it on the cluster from Eclipse? Please help.
>>> On Thu, Dec 8, 2011 at 12:12 PM, Harsh J <[email protected]> wrote:
>>> 
>>>> Do you not see progress, or do you not see a job at all?
>>>> 
>>>> Perhaps the problem is that your Eclipse is merely launching the
>>>> program via a LocalJobRunner, and not submitting to the cluster. This
>>>> is caused by an improper config setup (you need "mapred.job.tracker"
>>>> set at minimum, to submit a distributed job).
>>>> 
>>>> On 08-Dec-2011, at 12:10 PM, Vamshi Krishna wrote:
>>>> 
>>>>> Hi all,
>>>>> I am running HBase on a 3-machine cluster. I am running a mapreduce
>>>>> program to insert data into an HBase table from Eclipse, so while it
>>>>> was running I opened the Hadoop jobtracker and tasktracker pages
>>>>> (http://10.0.1.54:50030 and http://10.0.1.54:50060) in a browser, but
>>>>> I could find no changes or progress of the mapreduce jobs, like the
>>>>> map tasks' progress etc. What is the problem, and how can I see their
>>>>> progress in the browser while the mapreduce program is running from
>>>>> Eclipse? I am using Ubuntu 10.04.
>>>>> 
>>>>> Can anybody help?
>>>>> 
>>>>> --
>>>>> Regards,
>>>>> Vamshi Krishna
>>>> 
>>>> 
>>> 
>>> 
>> 
> 
> 
> 