Hi,

I'm running a new Spark 1.4.0 installation and ran a job on it. The job
finished and everything seems fine. But when I open the application page
in the web UI, the executor for that job is marked as KILLED:

Removed Executors

ExecutorID  Worker                                     Cores  Memory (MB)  State   Logs
0           worker-20150615080550-172.31.11.225-51630  4      10240        KILLED  stdout, stderr

When I open the worker's own page, the same executor is marked as EXITED:

ExecutorID  Cores  State   Memory   Job Details                  Logs
0           4      EXITED  10.0 GB  ID: app-20150615080601-0000  stdout, stderr
                                    Name: dev.app.name
                                    User: root

There is nothing interesting in either stdout or stderr.

Why is the executor marked as KILLED on the application page?

This is the only job I ran, and it is the job that was running on this
executor. Also, judging by the logs and the output, everything seems to
have run fine.
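
For reference, this is roughly how the job is set up and shut down. It is
a minimal sketch, not my exact code: the master URL and the computation
are placeholders, and only the app name matches what the UI shows.

    import org.apache.spark.{SparkConf, SparkContext}

    object DevApp {
      def main(args: Array[String]): Unit = {
        // Connect to the standalone master (placeholder host).
        val conf = new SparkConf()
          .setAppName("dev.app.name")
          .setMaster("spark://<master-host>:7077")
        val sc = new SparkContext(conf)

        // Placeholder for the real work: a trivial action that completes normally.
        val result = sc.parallelize(1 to 1000000).map(_ * 2L).sum()
        println(s"result = $result")

        // Shut down cleanly; even after this, the application page still
        // lists the removed executor with state KILLED.
        sc.stop()
      }
    }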

thanks, nizan


