[ https://issues.apache.org/jira/browse/SPARK-12554?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Lijie Xu updated SPARK-12554:
-----------------------------
    Description: 
In scheduleExecutorsOnWorkers() in Master.scala,
{{val keepScheduling = coresToAssign >= minCoresPerExecutor}} should be changed 
to {{val keepScheduling = coresToAssign > 0}}.
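
For illustration, here is a minimal, self-contained model of how the current 
condition caps an app's assigned cores. This is only a sketch for this 
description, not the actual scheduleExecutorsOnWorkers() internals, which also 
track per-worker free cores, memory, and spread-out placement:

{code:scala}
// Simplified model of the standalone Master's core assignment for one app.
// Illustrative only: the real loop in Master.scala also checks each worker's
// free cores and memory, and optionally spreads executors across workers.
def coresAssignedUnderCurrentRule(requestedCores: Int, coresPerExecutor: Int): Int = {
  var coresToAssign = requestedCores
  var assigned = 0
  // Current condition: keep scheduling only while a whole executor still fits.
  while (coresToAssign >= coresPerExecutor) {
    coresToAssign -= coresPerExecutor
    assigned += coresPerExecutor
  }
  assigned
}
{code}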

Case 1: 
Suppose an app requests 10 cores (i.e., {{spark.cores.max = 10}}) 
and its coresPerExecutor is 4 (i.e., {{spark.executor.cores = 4}}). 

After allocating two executors (4 cores each) to this app, 
{{app.coresToAssign = 2}} and {{minCoresPerExecutor = coresPerExecutor = 4}}, 
so {{keepScheduling = false}} and no further executor will be allocated to this 
app. If {{spark.scheduler.minRegisteredResourcesRatio}} is set to a large 
value (e.g., > 0.8 in this case), the app will hang and never finish.
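
The arithmetic for this case, spelled out with the numbers from the example 
(a sketch only; the ratio is simply assigned cores over requested cores):

{code:scala}
// Case 1: spark.cores.max = 10, spark.executor.cores = 4
val requested     = 10
val perExecutor   = 4
val granted       = (requested / perExecutor) * perExecutor  // 8 cores, i.e. two executors
val coresToAssign = requested - granted                      // 2 cores left, never assigned
val keepScheduling = coresToAssign >= perExecutor            // 2 >= 4  => false, scheduling stops
val bestRatio     = granted.toDouble / requested             // 0.8, the most this app can ever register
// Any spark.scheduler.minRegisteredResourcesRatio above 0.8 can therefore
// never be satisfied, and the app hangs waiting for resources.
{code}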

Case 2: If a small app's coresPerExecutor is larger than its total requested 
cores (e.g., {{spark.cores.max = 10}}, {{spark.executor.cores = 16}}), the 
condition {{coresToAssign >= minCoresPerExecutor}} is always false, so 
{{keepScheduling}} is never true. As a result, the app never gets an executor 
to run.
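
The same arithmetic for Case 2, again using only the numbers from the example:

{code:scala}
// Case 2: spark.cores.max = 10, spark.executor.cores = 16
val coresToAssign2       = 10
val minCoresPerExecutor2 = 16
val keepScheduling2 = coresToAssign2 >= minCoresPerExecutor2  // 10 >= 16 => always false
// The current condition rejects the app on every scheduling pass, so it never
// receives a single executor. With the proposed condition, 10 > 0 is true,
// so the keepScheduling check at least no longer blocks this app.
{code}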



  was:
In scheduleExecutorsOnWorker() in Master.scala,
*val keepScheduling = coresToAssign >= minCoresPerExecutor* should be changed 
to *val keepScheduling = coresToAssign > 0*

Case 1: 
Suppose that an app's requested cores is 10 (i.e., spark.cores.max = 10) and 
app.coresPerExecutor is 4 (i.e., spark.executor.cores = 4). 

After allocating two executors (each has 4 cores) to this app, the 
*app.coresToAssign = 2* and *minCoresPerExecutor = coresPerExecutor = 4*, so 
*keepScheduling = false* and no extra executor will be allocated to this app. 
If *spark.scheduler.minRegisteredResourcesRatio* is set to a large number 
(e.g., > 0.8 in this case), the app will hang and never finish.

Case 2: if a small app's coresPerExecutor is larger than its requested cores 
(e.g., spark.cores.max = 10, spark.executor.cores = 16), *val keepScheduling = 
coresToAssign >= minCoresPerExecutor* is always FALSE. As a result, this app 
will never get an executor to run.




> Standalone app scheduler will hang when app.coresToAssign < minCoresPerExecutor
> -------------------------------------------------------------------------------
>
>                 Key: SPARK-12554
>                 URL: https://issues.apache.org/jira/browse/SPARK-12554
>             Project: Spark
>          Issue Type: Bug
>          Components: Deploy, Scheduler
>    Affects Versions: 1.5.2
>            Reporter: Lijie Xu
>
> In scheduleExecutorsOnWorkers() in Master.scala,
> {{val keepScheduling = coresToAssign >= minCoresPerExecutor}} should be 
> changed to {{val keepScheduling = coresToAssign > 0}}.
> Case 1: 
> Suppose an app requests 10 cores (i.e., {{spark.cores.max = 10}}) 
> and its coresPerExecutor is 4 (i.e., {{spark.executor.cores = 4}}). 
> After allocating two executors (4 cores each) to this app, 
> {{app.coresToAssign = 2}} and {{minCoresPerExecutor = coresPerExecutor = 4}}, 
> so {{keepScheduling = false}} and no further executor will be allocated to this 
> app. If {{spark.scheduler.minRegisteredResourcesRatio}} is set to a large 
> value (e.g., > 0.8 in this case), the app will hang and never finish.
> Case 2: If a small app's coresPerExecutor is larger than its total requested 
> cores (e.g., {{spark.cores.max = 10}}, {{spark.executor.cores = 16}}), the 
> condition {{coresToAssign >= minCoresPerExecutor}} is always false, so 
> {{keepScheduling}} is never true. As a result, the app never gets an executor 
> to run.



