Re: issue while submitting Spark Job as --master yarn-cluster

2015-03-25 Thread Sandy Ryza
Hi Sachin,

It appears that the application master is failing.  To figure out what's
wrong, you need to get the logs for the application master.
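
For example, assuming log aggregation is enabled on the cluster, the AM logs
can usually be fetched with the `yarn logs` CLI, using the application ID from
the log excerpt below:

```shell
# Pull all aggregated container logs for the failed application, which
# include the AM container's stdout/stderr
# (requires yarn.log-aggregation-enable=true on the cluster):
yarn logs -applicationId application_1427124496008_0028

# If aggregation is disabled, the logs remain on the NodeManager host,
# under the directories listed in yarn.nodemanager.log-dirs, in a
# subdirectory named after the container ID.
```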

-Sandy

On Wed, Mar 25, 2015 at 7:05 AM, Sachin Singh 
wrote:

> The OS I am using is Linux.
> When I run it simply with --master yarn, it runs fine.
>
> Regards
> Sachin
>
> On Wed, Mar 25, 2015 at 4:25 PM, Xi Shen  wrote:
>
>> What is your environment? I remember I had a similar error when running
>> "spark-shell --master yarn-client" in a Windows environment.
>>
>>
>> On Wed, Mar 25, 2015 at 9:07 PM sachin Singh 
>> wrote:
>>
>>> Hi,
>>> When I submit a Spark job in cluster mode, I get the error below in the
>>> hadoop-yarn log. Does anyone have any idea? Please suggest.
>>>
>>> 2015-03-25 23:35:22,467 INFO
>>> org.apache.hadoop.yarn.server.resourcemanager.rmapp.RMAppImpl:
>>> application_1427124496008_0028 State change from FINAL_SAVING to FAILED
>>> 2015-03-25 23:35:22,467 WARN
>>> org.apache.hadoop.yarn.server.resourcemanager.RMAuditLogger: USER=hdfs
>>> OPERATION=Application Finished - Failed TARGET=RMAppManager
>>>  RESULT=FAILURE
>>> DESCRIPTION=App failed with state: FAILED   PERMISSIONS=Application
>>> application_1427124496008_0028 failed 2 times due to AM Container for
>>> appattempt_1427124496008_0028_02 exited with  exitCode: 13 due to:
>>> Exception from container-launch.
>>> Container id: container_1427124496008_0028_02_01
>>> Exit code: 13
>>> Stack trace: ExitCodeException exitCode=13:
>>> at org.apache.hadoop.util.Shell.runCommand(Shell.java:538)
>>> at org.apache.hadoop.util.Shell.run(Shell.java:455)
>>> at org.apache.hadoop.util.Shell$ShellCommandExecutor.execute(Shell.java:702)
>>> at org.apache.hadoop.yarn.server.nodemanager.DefaultContainerExecutor.launchContainer(DefaultContainerExecutor.java:197)
>>> at org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.ContainerLaunch.call(ContainerLaunch.java:299)
>>> at org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.ContainerLaunch.call(ContainerLaunch.java:81)
>>> at java.util.concurrent.FutureTask.run(FutureTask.java:262)
>>> at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
>>> at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
>>> at java.lang.Thread.run(Thread.java:745)
>>>
>>>
>>> Container exited with a non-zero exit code 13
>>> .Failing this attempt.. Failing the application.
>>> APPID=application_1427124496008_0028
>>>
>>>
>>>
>>> --
>>> View this message in context: http://apache-spark-user-list.1001560.n3.nabble.com/issue-while-submitting-Spark-Job-as-master-yarn-cluster-tp0.html
>>> Sent from the Apache Spark User List mailing list archive at Nabble.com.
>>>
>>> -
>>> To unsubscribe, e-mail: user-unsubscr...@spark.apache.org
>>> For additional commands, e-mail: user-h...@spark.apache.org
>>>
>>>
>
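
For readers hitting the same error: AM exit code 13 in yarn-cluster mode
commonly points to a master-URL conflict, i.e. a master hardcoded in the
application (for example setMaster("local")) overriding the --master
yarn-cluster flag. That is only an assumption here, since the thread never
shows the submit command or the application code. A minimal sketch of a
submit that avoids the conflict (the main class and jar path are
hypothetical):

```shell
# Do not call setMaster(...) in the application code; let spark-submit
# supply the master so the same jar runs locally and on YARN:
spark-submit \
  --master yarn-cluster \
  --class com.example.MyApp \
  /path/to/my-app.jar
```

If the master is set in both places and the two values disagree, the AM
typically exits with code 13, which matches the container-launch failure in
the log above.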

