SparkContext is not serializable and can't just be "sent across" ;)
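Since the SparkContext only lives on the driver, the cancellation has to happen there. A minimal sketch of one possible pattern (the names UnrecoverableError, "my-job" and the validation placeholder are purely illustrative, not an established API): the task throws, the driver catches the failure and cancels the job group it started. Note that Spark will still retry the failed task up to spark.task.maxFailures before the job is aborted.

import org.apache.spark.{SparkConf, SparkContext, TaskContext}

object FatalErrorSketch {
  // Hypothetical marker for errors that retrying cannot fix.
  class UnrecoverableError(msg: String) extends RuntimeException(msg)

  def process[T](taskContext: TaskContext, data: Iterator[T]): Unit = {
    data.foreach { record =>
      val unrecoverable = false // placeholder for real validation logic
      if (unrecoverable) {
        // Throwing propagates the failure back to the driver;
        // Spark may still retry the task up to spark.task.maxFailures times.
        throw new UnrecoverableError(s"cannot process $record")
      }
    }
  }

  def main(args: Array[String]): Unit = {
    val sc = new SparkContext(
      new SparkConf().setAppName("fatal-error-sketch").setMaster("local[2]"))
    val rdd = sc.parallelize(1 to 100)

    // Tag the job so it can be cancelled as a group from the driver side.
    sc.setJobGroup("my-job", "job that may hit an unrecoverable error")
    try {
      sc.runJob(rdd, (tc: TaskContext, it: Iterator[Int]) => process(tc, it))
    } catch {
      case _: Throwable =>
        // The driver owns the SparkContext, so cancellation happens here,
        // not inside the task. This mainly matters for other jobs sharing
        // the same group; the failed job itself has already been aborted.
        sc.cancelJobGroup("my-job")
    } finally {
      sc.stop()
    }
  }
}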
2014-06-21 14:14 GMT+02:00 Mayur Rustagi <mayur.rust...@gmail.com>:

> You can terminate a job group from the spark context; you'll have to send
> the spark context across to your task.
> On 21 Jun 2014 01:09, "Piotr Kołaczkowski" <pkola...@datastax.com> wrote:
>
>> If the task detects an unrecoverable error, i.e. an error that we can't
>> expect to fix by retrying nor by moving the task to another node, how do we
>> stop the job / prevent Spark from retrying it?
>>
>> def process(taskContext: TaskContext, data: Iterator[T]) {
>>   ...
>>
>>   if (unrecoverableError) {
>>     ??? // terminate the job immediately
>>   }
>>   ...
>> }
>>
>> Somewhere else:
>> rdd.sparkContext.runJob(rdd, something.process _)
>>
>>
>> Thanks,
>> Piotr
>>
>>
>> --
>> Piotr Kolaczkowski, Lead Software Engineer
>> pkola...@datastax.com
>>
>> http://www.datastax.com/
>> 777 Mariners Island Blvd., Suite 510
>> San Mateo, CA 94404
>>

--
Piotr Kolaczkowski, Lead Software Engineer
pkola...@datastax.com

http://www.datastax.com/
3975 Freedom Circle
Santa Clara, CA 95054, USA