GitHub user mccheah commented on the issue:

    https://github.com/apache/spark/pull/21067
  
    Looks like there are a lot of conflicts from the refactor that was just 
merged.
    
    In general, though, I don't think this buys us much. The problem is that 
when the driver fails, you lose all progress made so far. We don't have a 
solid story for checkpointing streaming computation right now, and even if we 
did, you'd certainly still lose all progress from batch jobs.
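
    For context, here is roughly what the streaming side of that story looks 
like today: a minimal sketch of Structured Streaming checkpointing. The 
broker, topic, and paths are hypothetical.

    ```scala
    import org.apache.spark.sql.SparkSession

    object CheckpointSketch {
      def main(args: Array[String]): Unit = {
        // Checkpointing lets a restarted streaming query resume from its last
        // committed offsets and state; it does nothing for in-flight batch
        // jobs, which is the gap noted above.
        val spark = SparkSession.builder().appName("checkpoint-sketch").getOrCreate()

        val events = spark.readStream
          .format("kafka")
          .option("kafka.bootstrap.servers", "broker:9092") // hypothetical broker
          .option("subscribe", "events")                    // hypothetical topic
          .load()

        events.writeStream
          .format("parquet")
          .option("path", "/data/events")                      // hypothetical output path
          .option("checkpointLocation", "/checkpoints/events") // survives driver restarts
          .start()
          .awaitTermination()
      }
    }
    ```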
    
    Also, restarting the driver might not be the right thing to do in all 
cases. This assumes it's always OK for the driver to relaunch itself 
automatically, but whether the driver should be relaunchable ought to be 
decided by the application submitter, not applied unconditionally. Can we 
make this behavior configurable? A sketch of what that could look like 
follows below.
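
    As a sketch of the configurable version: a conf key the submitter sets, 
which the driver pod builder reads when choosing the pod's restart policy. 
The key name here is purely illustrative, not an existing Spark setting.

    ```scala
    import io.fabric8.kubernetes.api.model.PodBuilder
    import org.apache.spark.SparkConf

    object RestartPolicySketch {
      // Hypothetical conf key; illustrative only, not an existing Spark setting.
      val RestartPolicyKey = "spark.kubernetes.driver.restartPolicy"

      // Default to "Never" so automatic relaunch is strictly opt-in for the
      // application submitter.
      def driverRestartPolicy(conf: SparkConf): String =
        conf.get(RestartPolicyKey, "Never") // e.g. "Never" or "OnFailure"

      // Applied while building the driver pod spec with the fabric8 client
      // the K8s backend already uses.
      def withRestartPolicy(pod: PodBuilder, conf: SparkConf): PodBuilder =
        pod.editOrNewSpec()
          .withRestartPolicy(driverRestartPolicy(conf))
          .endSpec()
    }
    ```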

