Hey Suresh!

The old way of setting the memory limits via the mapreduce properties should
work after OOZIE-2896 <https://issues.apache.org/jira/browse/OOZIE-2896>;
see that jira for the details.
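
With that approach it would look something like this in the action's
<configuration> block (check the jira for the exact property names; 4096 is
just an example value):

<configuration>
    <property>
        <name>oozie.launcher.mapreduce.map.memory.mb</name>
        <value>4096</value>
    </property>
</configuration>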
However, the new (and preferred) way of doing so is to add a launcher
configuration to the action, like this:

<launcher>
    <memory.mb>4096</memory.mb>
</launcher>
This should go between the name-node and the job-xml tags of the action, or
it can go into the workflow's <global> section to apply to every action in
the workflow.
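
For example, in a Spark action it could look roughly like this (the action
name and the other elements are just placeholders):

<action name="my-spark-action">
    <spark xmlns="uri:oozie:spark-action:1.0">
        <resource-manager>${resourceManager}</resource-manager>
        <name-node>${nameNode}</name-node>
        <launcher>
            <memory.mb>4096</memory.mb>
        </launcher>
        ... rest of the action ...
    </spark>
    <ok to="end"/>
    <error to="fail"/>
</action>

or in the global section:

<global>
    <launcher>
        <memory.mb>4096</memory.mb>
    </launcher>
</global>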
See the Common schema section in the Workflow Functional Specification
<https://oozie.apache.org/docs/5.0.0/WorkflowFunctionalSpec.html> for the
details.

The third option is to increase the default by setting
oozie.launcher.default.memory.mb in oozie-site.xml.
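
For example (I believe this needs an Oozie server restart to take effect):

<property>
    <name>oozie.launcher.default.memory.mb</name>
    <value>4096</value>
</property>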

Hope it helps,
gp

On Wed, Aug 29, 2018 at 10:02 PM Suresh V <verdi...@gmail.com> wrote:

> We recently launched an EMR cluster with latest version of Oozie that is
> Oozie 5.0.0.
>
> We understand the Oozie launcher is no longer a MapReduce job in Yarn.
> We see that it shows up as 'Oozie launcher' in the Yarn UI.
>
> Our workflow has a Spark and a Sqoop job, and the launcher is failing its
> first attempt with an out-of-memory error, causing it to retry multiple
> times.
> Please advise where we can set the memory limits for the Oozie launcher in
> order to work around this error.
>
> AM Container for appattempt_1534385557019_0066_000001 exited with exitCode:
> -104
> Failing this attempt.Diagnostics: Container
> [pid=19109,containerID=container_1534385557019_0066_01_000001] is running
> beyond physical memory limits. Current usage: 2.2 GB of 2 GB physical
> memory used; 9.9 GB of 10 GB virtual memory used. Killing container.
>
>
> Thank you
> Suresh.
>


-- 
*Peter Cseh *| Software Engineer
cloudera.com <https://www.cloudera.com>
