I faced a similar problem when running on a small test cluster. I'm not
aware of the memory requirements of your underlying map-reduce job, but
a prime cause of this 'deadlock' behaviour is that Oozie map-reduce jobs
are started by an Oozie launcher job (which is a map-only job). This
launcher job takes up task slots in the Hadoop queue (the 'default' queue
is used by default) and stays alive until the underlying map-reduce job
terminates. If the underlying map-reduce job gets no task slots, it will
wait for resources to be released, while the launcher waits for that job to
terminate - so neither can make progress.

A way to mitigate this issue is to split your default queue into two or
more queues. Set the oozie.launcher.mapred.job.queue.name and
mapred.job.queue.name properties to different queues and you should be fine.
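
As a rough sketch, the action's configuration in your workflow.xml could
look like the following (the queue name 'launcherq' is just an example -
both queues must actually exist in your scheduler configuration):

```xml
<action name="mr-node">
    <map-reduce>
        <job-tracker>${jobTracker}</job-tracker>
        <name-node>${nameNode}</name-node>
        <configuration>
            <!-- queue for the Oozie launcher (map-only) job -->
            <property>
                <name>oozie.launcher.mapred.job.queue.name</name>
                <value>launcherq</value>
            </property>
            <!-- queue for the actual map-reduce job -->
            <property>
                <name>mapred.job.queue.name</name>
                <value>default</value>
            </property>
        </configuration>
    </map-reduce>
    <ok to="end"/>
    <error to="fail"/>
</action>
```

With the launcher on its own queue, it can no longer occupy the slots the
underlying job is waiting for.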

On Thu, Feb 11, 2016 at 9:06 AM, satish saley <[email protected]>
wrote:

> Could you please share the Hadoop logs for your job?
>
>
>
>
> On Wednesday, February 10, 2016, 7:33 PM, tkg_cangkul <
> [email protected]> wrote:
>
> I've been trying to run Oozie from a virtual machine, but the process is
> stuck in the RUNNING state. I've set my RM memory to 4 GB and my vcores
> to 2. Is there any configuration that I've missed?
> Please help.
>
> [image: capture from my resource manager]
>
>
