Hi, guys

I am running a Spark Streaming job with Kafka on YARN.
My business logic is very simple: just listen on a Kafka topic and write
the DStream to HDFS on each batch iteration.
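
For reference, the job is roughly equivalent to the sketch below, assuming the receiver-based KafkaUtils API in Spark Streaming 1.1; the ZooKeeper quorum, consumer group, topic name, batch interval, and HDFS output path are all placeholders, not my real values:

```scala
import org.apache.spark.SparkConf
import org.apache.spark.streaming.{Seconds, StreamingContext}
import org.apache.spark.streaming.kafka.KafkaUtils

object KafkaToHdfs {
  def main(args: Array[String]): Unit = {
    val conf = new SparkConf().setAppName("kafka-to-hdfs")
    val ssc  = new StreamingContext(conf, Seconds(10))

    // createStream returns a DStream of (key, value) pairs and runs
    // one Kafka receiver inside an executor.
    val stream = KafkaUtils.createStream(
      ssc,
      "zk-host:2181",       // ZooKeeper quorum (placeholder)
      "my-consumer-group",  // consumer group id (placeholder)
      Map("my-topic" -> 1)) // topic -> number of receiver threads

    // Write each batch's messages out as text files under an HDFS prefix.
    stream.map(_._2).saveAsTextFiles("hdfs:///user/me/out/batch")

    ssc.start()
    ssc.awaitTermination()
  }
}
```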
The job runs well for a few hours after launch, but then it suddenly
dies: the ResourceManager sends a SIGTERM to the Kafka receiver executor,
and the other executors also die, without any error messages in the logs.
No OutOfMemoryError, no other exceptions... the executors just die.

Do you know anything about this issue?
Any help resolving this problem would be appreciated.

My environment is described below.

* Scala : 2.10
* Spark : 1.1
* YARN : 2.5.0-cdh5.2.1
* CDH : 5.2.1 version

Thanks in advance.

Have a nice day~



--
View this message in context: 
http://apache-spark-user-list.1001560.n3.nabble.com/KafkaReceiver-executor-in-spark-streaming-job-on-YARN-suddenly-killed-by-ResourceManager-tp20945.html
Sent from the Apache Spark User List mailing list archive at Nabble.com.
