[ https://issues.apache.org/jira/browse/SPARK-15815?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15331132#comment-15331132 ]

Imran Rashid commented on SPARK-15815:
--------------------------------------

[~SuYan] Is this the same as https://issues.apache.org/jira/browse/SPARK-15865 ?
The situation you are describing seems the same, though that issue doesn't only
affect Dynamic Allocation.

Perhaps there is something better you can do with dynamic allocation as well, 
but maybe that is a different issue.  Take a look at the latest design doc I 
posted on SPARK-8426 to see if that addresses your concern.

> Hang when blacklistExecutor and DynamicExecutorAllocator are enabled
> ---------------------------------------------------------------------
>
>                 Key: SPARK-15815
>                 URL: https://issues.apache.org/jira/browse/SPARK-15815
>             Project: Spark
>          Issue Type: Bug
>          Components: Scheduler, Spark Core
>    Affects Versions: 1.6.1
>            Reporter: SuYan
>            Priority: Minor
>
> Enable executor blacklisting with a blacklist time larger than 120s, and 
> enable dynamic allocation with minExecutors = 0.
> 1. Assume only one task is left running, on executor A, and all other 
> executors have already been removed as idle (timed out).
> 2. The task fails, so it will not be scheduled on executor A again because 
> of the blacklist time.
> 3. The ExecutorAllocationManager keeps requesting targetNumExecutor = 1. 
> Since we already have executor A, oldTargetNumExecutor == targetNumExecutor 
> == 1, so no additional executors are ever added, even after executor A 
> itself times out. It ends up endlessly requesting delta = 0 executors.
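
For reference, here is a minimal, self-contained sketch (in Scala) of the 
target-recalculation arithmetic described in the quoted report. This is not 
Spark's actual ExecutorAllocationManager code; the names oldTarget, 
tasksOutstanding, and tasksPerExecutor are illustrative assumptions. It only 
shows why, in the reported scenario, the requested delta stays at 0:

// Minimal sketch of the allocation-target arithmetic described above.
// NOT Spark's ExecutorAllocationManager; all names are illustrative.
object AllocationHangSketch {

  // Executors needed to run the outstanding tasks (ceiling division).
  def maxExecutorsNeeded(tasksOutstanding: Int, tasksPerExecutor: Int): Int =
    (tasksOutstanding + tasksPerExecutor - 1) / tasksPerExecutor

  // Delta that would be requested from the cluster manager on each tick.
  def executorsToRequest(oldTarget: Int, tasksOutstanding: Int,
      tasksPerExecutor: Int): Int = {
    val newTarget = maxExecutorsNeeded(tasksOutstanding, tasksPerExecutor)
    newTarget - oldTarget
  }

  def main(args: Array[String]): Unit = {
    // One task is left and executor A still counts toward the target, so the
    // recomputed target (1) equals the old target (1): the delta is 0 and no
    // new executor is ever requested, even though executor A is blacklisted
    // for that task and the task can never run there.
    val delta = executorsToRequest(oldTarget = 1, tasksOutstanding = 1,
      tasksPerExecutor = 1)
    println(s"requested delta = $delta")  // prints: requested delta = 0
  }
}

With spark.dynamicAllocation.minExecutors = 0, as in the report, there is also 
no lower bound that would force a replacement executor, which is what makes 
the hang possible in the scenario above.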


