Hi Kristoffer,

As far as I can tell, you have to package your classes into a jar before
submitting a job.

"hadoop jar" is the simplest way to submit a job, though there are other
approaches. MapReduce uses the mapred.job.tracker property to decide whether
to run a job remotely or locally. The "hadoop jar" command sets it to the
configured job tracker address automatically, so the job is submitted to the
remote cluster.
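For illustration, a minimal sketch of the flow, assuming a jar named wordcount.jar and a main class com.example.WordCount (both hypothetical names):

```shell
# Package the job classes into a jar, then submit it with "hadoop jar".
# The command reads mapred.job.tracker from the cluster configuration files
# on its classpath, so the job runs on the remote cluster rather than in
# the local runner. Input and output paths are HDFS paths.
hadoop jar wordcount.jar com.example.WordCount /input /output
```

When running from an IDE instead, the same effect can be had by putting the cluster's configuration (with mapred.job.tracker pointing at the job tracker) on the classpath and setting mapred.jar to the packaged jar, so the classes ship to the cluster.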


2014-02-20 5:01 GMT+08:00 Kristoffer Sjögren <[email protected]>:

> Hi
>
> I'm running the crunch wordcount example using ToolRunner.run (from
> IntelliJ) and data is read from hdfs, but the actual job is running locally
> instead of on the remote cluster.
>
> Do I need to use the hadoop jar command with a pre-packaged jar? Or is
> there any other way to kick off a remote job?
>
> Cheers,
> -Kristoffer
>
