That is the JT ID, and there is no problem running jobs from the command line;
only remote submission fails.
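
For reference, the remote submission boils down to roughly the sketch below
(a minimal sketch assuming Hadoop 1.x and the org.apache.hadoop.mapreduce API;
the host names, ports, paths and the doAs() wrapper are placeholders and
assumptions, not the exact values or code in use):

import java.security.PrivilegedExceptionAction;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;
import org.apache.hadoop.security.UserGroupInformation;

public class RemoteSubmit {
  public static void main(String[] args) throws Exception {
    final Configuration conf = new Configuration();
    // Point the client at the remote cluster instead of the local defaults.
    // Host names and ports are placeholders.
    conf.set("fs.default.name", "hdfs://namenode.example.com:8020");
    conf.set("mapred.job.tracker", "jobtracker.example.com:8021");

    // On a non-secure cluster, one commonly suggested way to submit as the
    // "hadoop" user instead of the local OS user ("myusername") is to wrap
    // the submission in doAs(); this is an assumption, not a confirmed setup.
    UserGroupInformation ugi = UserGroupInformation.createRemoteUser("hadoop");
    ugi.doAs(new PrivilegedExceptionAction<Void>() {
      public Void run() throws Exception {
        Job job = new Job(conf, "remote-test"); // hypothetical job name
        job.setJarByClass(RemoteSubmit.class);
        // Default identity Mapper/Reducer keep the sketch short; the input
        // and output paths below are placeholders.
        FileInputFormat.addInputPath(job, new Path("/user/myusername/input"));
        FileOutputFormat.setOutputPath(job, new Path("/user/myusername/output"));
        job.submit(); // non-blocking submission
        System.out.println("Submitted " + job.getJobID());
        return null;
      }
    });
  }
}

If the doAs()/createRemoteUser() route works where "hadoop.job.ugi" does not,
that would point at the user mapping rather than the JobTracker address.
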
On Apr 15, 2013 4:24 PM, "Harsh J" <ha...@cloudera.com> wrote:

> That's interesting; is the JT you're running on the cluster started
> with the ID 201304150711 or something else?
>
> On Mon, Apr 15, 2013 at 6:47 PM, Amit Sela <am...@infolinks.com> wrote:
> > The client prints the two lines I posted and the cluster shows nothing.
> > Not even incrementing the number of submitted jobs.
> >
> > On Apr 15, 2013 4:10 PM, "Harsh J" <ha...@cloudera.com> wrote:
> >>
> >> When you say "nothing happens", where exactly do you mean? The client
> >> doesn't print anything, or the cluster doesn't run anything?
> >>
> >> On Mon, Apr 15, 2013 at 3:36 PM, Amit Sela <am...@infolinks.com> wrote:
> >> > Hi all,
> >> >
> >> > I'm trying to submit a MapReduce job remotely using job.submit().
> >> >
> >> > I get the following:
> >> >
> >> > [WARN ] org.apache.hadoop.mapred.JobClient       » Use GenericOptionsParser for parsing the arguments. Applications should implement Tool for the same.
> >> > [INFO ] org.apache.hadoop.mapred.JobClient       » Cleaning up the staging area hdfs://{namenode address}:{port}{hadoop.tmp.dir}/mapred/staging/myusername/.staging/job_201304150711_0022
> >> >
> >> > and nothing happens...
> >> >
> >> > I set mapred.job.tracker and changed permissions for hadoop.tmp.dir.
> >> > I also set "hadoop.job.ugi" to "hadoop,supergroup" but somehow I think
> >> > it's not making any difference.
> >> > The system submitting the job runs as another user, call it myusername,
> >> > not hadoop.
> >> >
> >> > I believe it is related to the user permissions but I can't seem to get
> >> > it right.
> >> >
> >> > Thanks for the help,
> >> >
> >> > Amit.
> >> >
> >>
> >>
> >>
> >> --
> >> Harsh J
>
>
>
> --
> Harsh J
>
