I'm using Spark 1.0.0 and I'd like to kill a job running in cluster mode,
which means the driver is not running on the local node.

So how can I kill such a job? Is there a command analogous to "hadoop job
-kill <job-id>", which kills a running MapReduce job?
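
For reference, this is the kind of command I have in mind on the MapReduce
side (the job id below is just a placeholder), and I'm hoping something
similar exists for Spark:

    hadoop job -kill job_201407010000_0001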

Thanks
