Hi, I am getting the error below when trying to distcp a file from S3 to an HDFS cluster. The job runs on YARN, and it seems the containers are not being set up properly. When I browse the logs on the node manager, it says the container was not found, which suggests it never launched. The node manager logs show the same error as the one below.
java.io.IOException: DistCp failure: Job job_1440119911415_0001 has failed: Application application_1440119911415_0001 failed 2 times due to AM Container for appattempt_1440119911415_0001_000002 exited with exitCode: 1
For more detailed output, check application tracking page: http://hadoop2testnn.ec2.pin220.com:8088/cluster/app/application_1440119911415_0001 Then, click on links to logs of each attempt.
Diagnostics: Exception from container-launch.
Container id: container_1440119911415_0001_02_000001
Exit code: 1
Stack trace: ExitCodeException exitCode=1:
    at org.apache.hadoop.util.Shell.runCommand(Shell.java:545)
    at org.apache.hadoop.util.Shell.run(Shell.java:456)
    at org.apache.hadoop.util.Shell$ShellCommandExecutor.execute(Shell.java:722)
    at org.apache.hadoop.yarn.server.nodemanager.DefaultContainerExecutor.launchContainer(DefaultContainerExecutor.java:211)
    at org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.ContainerLaunch.call(ContainerLaunch.java:302)
    at org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.ContainerLaunch.call(ContainerLaunch.java:82)
    at java.util.concurrent.FutureTask.run(FutureTask.java:262)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
    at java.lang.Thread.run(Thread.java:745)
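For reference, the distcp invocation is essentially of this shape (bucket name, keys, and target path are placeholders, and I am assuming the s3a connector here, not my exact command):

    hadoop distcp \
      -Dfs.s3a.access.key=<ACCESS_KEY> \
      -Dfs.s3a.secret.key=<SECRET_KEY> \
      s3a://<bucket>/<path/to/file> \
      hdfs://<namenode>/<target/path>

The copy itself never gets going; the failure happens as soon as YARN tries to launch the application master container.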
