Hello,
I am trying to use Oozie 4.0.0 with Hadoop 2.2.0, but I am getting the following error:
Application application_1408366725874_0004 failed 2 times due to AM Container for appattempt_1408366725874_0004_000002 exited with exitCode: 1 due to: Exception from container-launch:
org.apache.hadoop.util.Shell$ExitCodeException:
    at org.apache.hadoop.util.Shell.runCommand(Shell.java:464)
    at org.apache.hadoop.util.Shell.run(Shell.java:379)
    at org.apache.hadoop.util.Shell$ShellCommandExecutor.execute(Shell.java:589)
    at org.apache.hadoop.yarn.server.nodemanager.DefaultContainerExecutor.launchContainer(DefaultContainerExecutor.java:195)
    at org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.ContainerLaunch.call(ContainerLaunch.java:283)
    at org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.ContainerLaunch.call(ContainerLaunch.java:79)
    at java.util.concurrent.FutureTask.run(FutureTask.java:262)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
    at java.lang.Thread.run(Thread.java:744)
.Failing this attempt.. Failing the application.
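Not even the container logs are retrievable; the standard way to pull them comes back empty (application ID taken from the error above):

    yarn logs -applicationId application_1408366725874_0004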
The error itself is not descriptive enough to tell what went wrong. It is a simple M/R job submitted as a workflow. The Hadoop settings are, as far as I can tell, correct: the oozie proxyuser group and hosts are set. The workflow runs fine when everything is installed on a single machine (pseudo-distributed mode); it fails when the HDFS cluster is in high availability. I am not sure how the workflow job should be configured for HA: for nameNode I tried both the active NameNode and the HA nameservice, and neither gets the job running. Unfortunately the YARN logs are not available either, as shown above; it seems to fail very early.
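For reference, the HA client configuration in hdfs-site.xml looks roughly like this (the nameservice is namenodeha; the NameNode host names below are placeholders, not the real ones):

    <property>
        <name>dfs.nameservices</name>
        <value>namenodeha</value>
    </property>
    <property>
        <name>dfs.ha.namenodes.namenodeha</name>
        <value>nn1,nn2</value>
    </property>
    <property>
        <!-- placeholder host name -->
        <name>dfs.namenode.rpc-address.namenodeha.nn1</name>
        <value>bd-prg-dev1-nn1:8020</value>
    </property>
    <property>
        <!-- placeholder host name -->
        <name>dfs.namenode.rpc-address.namenodeha.nn2</name>
        <value>bd-prg-dev1-nn2:8020</value>
    </property>
    <property>
        <name>dfs.client.failover.proxy.provider.namenodeha</name>
        <value>org.apache.hadoop.hdfs.server.namenode.ha.ConfiguredFailoverProxyProvider</value>
    </property>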
job.properties:
nameNode=hdfs://namenodeha
jobTracker=bd-prg-dev1-rm1:8050
queueName=default
examplesRoot=examples
oozie.wf.application.path=${nameNode}/user/${user.name}/${examplesRoot}/apps/map-reduce
outputDir=/user/${user.name}/${examplesRoot}/output-data/map-reduce
inputDir=/user/${user.name}/${examplesRoot}/input-data/text
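To be concrete, these are the two nameNode values I tried (the second host is a placeholder for our active NameNode):

    # variant 1: HA nameservice logical name
    nameNode=hdfs://namenodeha
    # variant 2: active NameNode directly (placeholder host)
    nameNode=hdfs://bd-prg-dev1-nn1:8020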
workflow.xml:
<workflow-app xmlns="uri:oozie:workflow:0.5" name="map-reduce-wf">
    <global>
        <job-tracker>${jobTracker}</job-tracker>
        <name-node>${nameNode}</name-node>
        <configuration>
            <property>
                <name>mapred.job.queue.name</name>
                <value>${queueName}</value>
            </property>
        </configuration>
    </global>
    <start to="mr-node"/>
    <action name="mr-node">
        <map-reduce>
            <prepare>
                <delete path="${nameNode}/${outputDir}"/>
            </prepare>
            <configuration>
                <property>
                    <name>mapred.mapper.class</name>
                    <value>org.apache.oozie.example.SampleMapper</value>
                </property>
                <property>
                    <name>mapred.reducer.class</name>
                    <value>org.apache.oozie.example.SampleReducer</value>
                </property>
                <property>
                    <name>mapred.map.tasks</name>
                    <value>1</value>
                </property>
                <property>
                    <name>mapred.input.dir</name>
                    <value>${inputDir}</value>
                </property>
                <property>
                    <name>mapred.output.dir</name>
                    <value>${outputDir}</value>
                </property>
            </configuration>
        </map-reduce>
        <ok to="end"/>
        <error to="fail"/>
    </action>
    <kill name="fail">
        <message>Map/Reduce failed, error message[${wf:errorMessage(wf:lastErrorNode())}]</message>
    </kill>
    <end name="end"/>
</workflow-app>
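The job is submitted with the plain Oozie CLI (the Oozie server URL here is a placeholder):

    oozie job -oozie http://oozie-host:11000/oozie -config job.properties -run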
Any hints would be appreciated.
--
Jakub