If a task detects an unrecoverable error, i.e. an error that we can't expect
to fix by retrying or by moving the task to another node, how can we stop the
job and prevent Spark from retrying the task?
import org.apache.spark.TaskContext

def process[T](taskContext: TaskContext, data: Iterator[T]): Unit = {
  // ... normal per-partition processing ...
  if (unrecoverableError) {
    ??? // terminate the whole job immediately, with no retries
  }
  // ...
}
Somewhere else, on the driver:
rdd.sparkContext.runJob(rdd, something.process _)
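
For reference, the closest workaround I can think of is to throw a marker
exception from the task and catch the resulting SparkException on the driver,
optionally with spark.task.maxFailures lowered to 1 so Spark doesn't retry at
all. A minimal sketch of that idea (UnrecoverableJobException is just a
placeholder name of mine, not a Spark API, and maxFailures = 1 is a blunt
instrument because it also disables retries for transient failures):

import org.apache.spark.{SparkConf, SparkContext, SparkException, TaskContext}

// Placeholder marker exception -- the name is mine, not part of Spark.
class UnrecoverableJobException(msg: String) extends RuntimeException(msg)

object AbortDemo {
  def main(args: Array[String]): Unit = {
    // spark.task.maxFailures = 1 makes the first task failure abort the
    // whole job, but every kind of failure then goes unretried.
    val conf = new SparkConf()
      .setAppName("abort-on-unrecoverable")
      .set("spark.task.maxFailures", "1")
    val sc = new SparkContext(conf)

    // Same shape as the process method above, but throwing the marker
    // exception where the ??? was.
    def process(ctx: TaskContext, data: Iterator[Int]): Unit =
      for (x <- data)
        if (x == 42) // stand-in for the real unrecoverable-error check
          throw new UnrecoverableJobException(s"cannot process element $x")

    try {
      val rdd = sc.parallelize(1 to 100)
      sc.runJob(rdd, process _)
    } catch {
      // Spark wraps the task's exception; by the time we get here the job
      // has already been aborted, so this only reports it.
      case e: SparkException => println(s"Job aborted: ${e.getMessage}")
    } finally {
      sc.stop()
    }
  }
}

But that feels heavy-handed, hence the question: is there a way to abort from
inside the task itself, without giving up retries for recoverable failures?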
Thanks,
Piotr
--
Piotr Kolaczkowski, Lead Software Engineer
pkola...@datastax.com
http://www.datastax.com/
777 Mariners Island Blvd., Suite 510
San Mateo, CA 94404