Any news regarding this setting? Is this expected behaviour? Is there some
other way I can make Spark fail fast?

Thanks!

On Mon, Dec 9, 2013 at 4:35 PM, Grega Kešpret <gr...@celtra.com> wrote:

> Hi!
>
> I tried this (by setting spark.task.maxFailures to 1) and it still does
> not fail fast. I started a job and, after some time, killed all the JVMs
> running on one of the two workers. I was expecting the Spark job to fail;
> instead, it rescheduled the tasks on the worker that was still alive and
> the job succeeded.
>
> Grega
>
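
For reference, a minimal sketch of how this setting can be applied, assuming
a Spark release (0.9+) where configuration goes through SparkConf; earlier
releases set it as a Java system property instead. The application name is
purely illustrative:

    import org.apache.spark.{SparkConf, SparkContext}

    // Limit each task to a single attempt so that a task failure aborts the job.
    // Note: spark.task.maxFailures governs per-task retries; tasks resubmitted
    // because a worker was lost may not count against this limit, which is the
    // behaviour under discussion in this thread.
    val conf = new SparkConf()
      .setAppName("fail-fast-example")        // illustrative name
      .set("spark.task.maxFailures", "1")

    val sc = new SparkContext(conf)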
