[ https://issues.apache.org/jira/browse/SPARK-24942?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16721841#comment-16721841 ]
Ilya Matiach commented on SPARK-24942:
--------------------------------------

Would really like to see this resolved. It would be great if we could have barrier execution with dynamic allocation enabled. When dynamic allocation is enabled, we should be able to automatically restart the job if resources are removed for some reason, and let the developer decide (in their own code) whether to restart the job when resources are added, in order to utilize them. For the latter case, I think many algorithms that would use something like barrier execution mode are iterative, so they should be able to save their current state and then restart when more resources are allocated.

> Improve cluster resource management with jobs containing barrier stage
> ----------------------------------------------------------------------
>
>                 Key: SPARK-24942
>                 URL: https://issues.apache.org/jira/browse/SPARK-24942
>             Project: Spark
>          Issue Type: Improvement
>          Components: Spark Core
>    Affects Versions: 2.4.0
>            Reporter: Xingbo Jiang
>            Priority: Major
>
> https://github.com/apache/spark/pull/21758#discussion_r205652317
> We shall improve cluster resource management to address the following issues:
> - With dynamic resource allocation enabled, it may happen that we acquire
> some executors (but not enough to launch all the tasks in a barrier stage),
> later release them when the executor idle timeout expires, and then acquire
> them again.
> - There can be a deadlock between two concurrent applications. Each
> application may acquire some resources, but not enough to launch all the
> tasks in a barrier stage. After hitting the idle timeout and releasing
> them, they may acquire resources again, but just continually trade
> resources between each other.
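The comment's suggestion that iterative barrier-style jobs save state and restart can be sketched in plain Python. This is not Spark API: `run_iterations`, the `checkpoint` dict, and the injected `fail_at` failure are illustrative assumptions standing in for a barrier stage, a persisted checkpoint, and a loss of executors under dynamic allocation.

```python
# Hypothetical sketch (not Spark API): checkpoint-and-restart pattern for an
# iterative barrier-style job, as suggested in the comment above.

def run_iterations(state, start, end, checkpoint, fail_at=None):
    """Run iterations [start, end); persist state after each one.

    `fail_at` injects a simulated resource loss to demonstrate restarting
    from the last checkpoint rather than from scratch.
    """
    for i in range(start, end):
        if fail_at is not None and i == fail_at:
            raise RuntimeError("executors lost")  # simulated resource removal
        state += i                 # stand-in for one barrier-stage iteration
        checkpoint["iter"] = i + 1  # record progress after each iteration
        checkpoint["state"] = state
    return state

checkpoint = {"iter": 0, "state": 0}
try:
    run_iterations(0, 0, 10, checkpoint, fail_at=6)
except RuntimeError:
    pass  # the driver notices that resources were removed mid-job

# Once resources are reacquired, resume from the last checkpoint.
result = run_iterations(checkpoint["state"], checkpoint["iter"], 10, checkpoint)
print(result)  # → 45, i.e. sum(range(10)): the restart loses no completed work
```

The same shape would apply to a real barrier job: checkpoint after each synchronized iteration, and on resource change decide in user code whether to resume from the checkpoint with the new executor count.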
--
This message was sent by Atlassian JIRA
(v7.6.3#76005)