[ https://issues.apache.org/jira/browse/SPARK-1946?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Zhihui updated SPARK-1946:
--------------------------

    Description: 
Because TaskSetManager creation and executor registration are asynchronous, running a job before enough executors have registered leads to several issues:
* early stages' tasks run without preferred locality (see the sketch after this list);
* on YARN, the default parallelism is derived from the number of registered executors, so it ends up too low;
* each node runs more shuffle tasks, so the number of intermediate shuffle files per node grows (incidentally, this can bring the node down);
* each node holds more MEMORY-persisted RDD data, which can fail the job when no disk fallback is specified (as in some MLlib algorithms);
* and so on.
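
The locality loss is easy to observe. The sketch below is illustration only, not part of this proposal, and the input path is hypothetical; it registers a listener that prints the locality level of each finished task. When the job is submitted before executors register, early tasks report ANY instead of NODE_LOCAL or PROCESS_LOCAL:

import org.apache.spark.{SparkConf, SparkContext}
import org.apache.spark.scheduler.{SparkListener, SparkListenerTaskEnd}

// Illustration only: print the locality level of every finished task.
object LocalityCheck {
  def main(args: Array[String]) {
    val sc = new SparkContext(new SparkConf().setAppName("LocalityCheck"))
    sc.addSparkListener(new SparkListener {
      override def onTaskEnd(taskEnd: SparkListenerTaskEnd) {
        println("task " + taskEnd.taskInfo.taskId +
          " locality = " + taskEnd.taskInfo.taskLocality)
      }
    })
    // Hypothetical input path; any HDFS file with several blocks works.
    sc.textFile("hdfs:///some/input").count()
    sc.stop()
  }
}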


A simple workaround is to sleep for a few seconds at the start of the application so that executors have enough time to register.
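
For example (a sketch; the 10-second figure and the input path are arbitrary):

val sc = new SparkContext(conf)
// Crude: give executors roughly 10 seconds to register before the
// first job. Wastes time when they come up faster, and may still be
// too short on a busy cluster.
Thread.sleep(10000)
sc.textFile("hdfs:///some/input").count()  // hypothetical input path

This also shows why the sleep is unsatisfying: the right duration depends on the cluster and cannot be known in advance.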

A better way is to make the DAGScheduler submit stages only after enough executors have registered, controlled by two configuration properties:

# Submit stages only after the ratio of successfully registered executors reaches this value; default 0.
spark.executor.registeredRatio = 0.8

# Regardless of whether registeredRatio has been reached, submit stages once maxRegisteredWaitingTime (in milliseconds) has elapsed; default 10000.
spark.executor.maxRegisteredWaitingTime = 5000
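
A sketch of how the check could look in the scheduler backend (assumed method and variable names; only the two property keys above come from this proposal):

import org.apache.spark.SparkConf

// Sketch of the readiness check (assumed names). Returns true once
// enough executors have registered, or once the maximum waiting time
// has elapsed, whichever comes first.
def isReady(registered: Int, expected: Int,
            startTimeMs: Long, conf: SparkConf): Boolean = {
  val minRatio = conf.getDouble("spark.executor.registeredRatio", 0)
  val maxWaitMs = conf.getLong("spark.executor.maxRegisteredWaitingTime", 10000)
  val ratioReached = expected > 0 && registered.toDouble / expected >= minRatio
  val waitedLongEnough = System.currentTimeMillis() - startTimeMs >= maxWaitMs
  ratioReached || waitedLongEnough
}

The DAGScheduler (or the scheduler backend it queries) would poll this before submitting the first stage, so a fully registered cluster starts immediately and a slow one starts after at most maxRegisteredWaitingTime.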

  was:
Because TaskSetManager creation and executor registration are asynchronous, in most situations early stages' tasks run without preferred locality.

A simple workaround is to sleep for a few seconds at the start of the application so that executors have enough time to register.

A better way is to make the DAGScheduler submit stages only after enough executors have registered, controlled by two configuration properties:

# Submit stages only after the ratio of successfully registered executors reaches this value; default 0.
spark.executor.registeredRatio = 0.8

# Regardless of whether registeredRatio has been reached, submit stages once maxRegisteredWaitingTime (in milliseconds) has elapsed; default 10000.
spark.executor.maxRegisteredWaitingTime = 5000



> Submit stage after executors have been registered
> -------------------------------------------------
>
>                 Key: SPARK-1946
>                 URL: https://issues.apache.org/jira/browse/SPARK-1946
>             Project: Spark
>          Issue Type: Improvement
>          Components: Spark Core
>    Affects Versions: 1.0.0
>            Reporter: Zhihui
>         Attachments: Spark Task Scheduler Optimization Proposal.pptx



--
This message was sent by Atlassian JIRA
(v6.2#6252)
