RE: Container exited with a non-zero exit code 1 - Spark Job on YARN

2016-01-21 Thread Siddharth Ubale
Hi Wellington,

Thanks for the reply.

I have kept the default values for the two properties you mentioned.
The Spark job expects the zip file to be in the Spark staging folder in HDFS; 
none of the documentation mentions this file.
I have also noticed that the job runs only when YARN allocates containers on 
the machine from which I am submitting the code; otherwise it always fails.
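
As a minimal sketch (the path below is a placeholder based on the default 
Spark-on-YARN layout of /user/<submitting-user>/.sparkStaging/<applicationId>/), 
this is one way to list what actually got uploaded to the staging folder:

    import org.apache.hadoop.conf.Configuration
    import org.apache.hadoop.fs.{FileSystem, Path}

    // List the Spark staging directory on HDFS to see which files were
    // actually uploaded for the application. "USERNAME" is a placeholder;
    // the real path depends on the submitting user and the application id.
    val fs = FileSystem.get(new Configuration())
    val staging = new Path("/user/USERNAME/.sparkStaging")
    fs.listStatus(staging).foreach(status => println(status.getPath))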

Thanks,
Siddharth Ubale



-----Original Message-----
From: Wellington Chevreuil [mailto:wellington.chevre...@gmail.com] 
Sent: Thursday, January 21, 2016 3:44 PM
To: Siddharth Ubale <siddharth.ub...@syncoms.com>
Subject: Re: Container exited with a non-zero exit code 1 - Spark Job on YARN

Hi,

For the memory issues, you might need to review the maximum allowed container 
memory in your YARN configuration. Check the values currently defined for the 
"yarn.nodemanager.resource.memory-mb" and 
"yarn.scheduler.maximum-allocation-mb" properties.

Regarding the file issue, is the file available on HDFS? Is there anything else 
writing to or changing the file while the job runs?


> On 20 Jan 2016, at 12:29, Siddharth Ubale <siddharth.ub...@syncoms.com> wrote:
> 
> Hi,
>  
> I am running a Spark job on the YARN cluster.
> The Spark job is a Spark Streaming application which reads JSON from a 
> Kafka topic, inserts the JSON values into HBase tables via Phoenix, and 
> then sends certain messages to a websocket if the JSON satisfies a 
> certain criterion.
>  
> My cluster is a 3-node cluster with 24GB RAM and 24 cores in total.
>  
> Now:
> 1. When I submit the job with 10GB memory, the application fails, 
> saying memory is insufficient to run the job.
> 2. When the job is submitted with 6G RAM it does not always run 
> successfully. Common issues faced:
> a. Container exited with a non-zero exit code 1, and after 
> multiple such warnings the job is finished.
> b. The failed job reports that it was unable to find 
> a file in HDFS which is something like _hadoop_conf_xx.zip
>  
> Can someone please let me know why I am seeing the above two issues?
>  
> Thanks,
> Siddharth Ubale,


-
To unsubscribe, e-mail: user-unsubscr...@spark.apache.org
For additional commands, e-mail: user-h...@spark.apache.org



Re: Container exited with a non-zero exit code 1 - Spark Job on YARN

2016-01-20 Thread Shixiong(Ryan) Zhu
Could you share your log?

On Wed, Jan 20, 2016 at 5:37 AM, Siddharth Ubale <
siddharth.ub...@syncoms.com> wrote:

> Hi,
>
> I am running a Spark job on the YARN cluster.
> The Spark job is a Spark Streaming application which reads JSON from a
> Kafka topic, inserts the JSON values into HBase tables via Phoenix, and
> then sends certain messages to a websocket if the JSON satisfies a
> certain criterion.
>
> My cluster is a 3-node cluster with 24GB RAM and 24 cores in total.
>
> Now:
>
> 1. When I submit the job with 10GB memory, the application fails, saying
> memory is insufficient to run the job.
>
> 2. When the job is submitted with 6G RAM it does not always run
> successfully. Common issues faced:
>
> a. Container exited with a non-zero exit code 1, and
> after multiple such warnings the job is finished.
>
> b. The failed job reports that it was unable to find a
> file in HDFS which is something like _hadoop_conf_xx.zip
>
> Can someone please let me know why I am seeing the above two issues?
>
> Thanks,
>
> Siddharth Ubale
>


Container exited with a non-zero exit code 1 - Spark Job on YARN

2016-01-20 Thread Siddharth Ubale
Hi,

I am running a Spark job on the YARN cluster.
The Spark job is a Spark Streaming application which reads JSON from a 
Kafka topic, inserts the JSON values into HBase tables via Phoenix, and then 
sends certain messages to a websocket if the JSON satisfies a certain 
criterion.
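
For context, a minimal sketch of the shape of the job (Spark 1.x streaming 
API; the topic name, ZooKeeper quorum, batch interval and the Phoenix/websocket 
writes below are placeholders, not the actual code):

    import org.apache.spark.SparkConf
    import org.apache.spark.streaming.{Seconds, StreamingContext}
    import org.apache.spark.streaming.kafka.KafkaUtils

    val sparkConf = new SparkConf().setAppName("json-kafka-streaming")
    val ssc = new StreamingContext(sparkConf, Seconds(5))

    // Receiver-based Kafka stream; the values are the raw JSON strings.
    val messages = KafkaUtils.createStream(
      ssc, "zk-host:2181", "consumer-group", Map("json-topic" -> 1))

    messages.map(_._2).foreachRDD { rdd =>
      rdd.foreachPartition { jsonRecords =>
        // Parse each JSON record, upsert it into HBase via Phoenix
        // (e.g. a JDBC UPSERT), and push matching records to the
        // websocket endpoint -- details omitted in this sketch.
      }
    }

    ssc.start()
    ssc.awaitTermination()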

My cluster is a 3-node cluster with 24GB RAM and 24 cores in total.

Now:
1. When I submit the job with 10GB memory, the application fails, saying 
memory is insufficient to run the job.
2. When the job is submitted with 6G RAM it does not always run successfully 
(see the sketch after this list). Common issues faced:
        a. Container exited with a non-zero exit code 1, and after 
multiple such warnings the job is finished.
        b. The failed job reports that it was unable to find a file in 
HDFS which is something like _hadoop_conf_xx.zip
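
For reference, a sketch of how such a memory request is usually split between 
the driver and the executors on YARN (the figures are illustrative 
placeholders, not my actual submission settings, and would normally be passed 
at submit time via spark-submit):

    import org.apache.spark.SparkConf

    // Illustrative memory split only. Each executor request (executor
    // memory plus the YARN overhead, roughly 10% extra) has to fit inside
    // yarn.scheduler.maximum-allocation-mb, so several smaller executors
    // often fit where one large executor is rejected.
    val submitConf = new SparkConf()
      .setAppName("json-kafka-streaming")
      .set("spark.executor.instances", "3")
      .set("spark.executor.memory", "2g")
      .set("spark.executor.cores", "2")
      .set("spark.driver.memory", "2g")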

Can someone please let me know why I am seeing the above two issues?

Thanks,
Siddharth Ubale,


