[ 
https://issues.apache.org/jira/browse/SPARK-1946?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Zhihui updated SPARK-1946:
--------------------------

    Description: 
Because creating the TaskSetManager and registering executors are asynchronous, running a job before enough executors have registered leads to several issues:
* tasks in early stages run without their preferred locality;
* the default parallelism on YARN is based on the number of registered executors, so it ends up too low;
* the number of intermediate shuffle files per node grows, since fewer nodes share the load (this can bring a node down);
* the amount of memory consumed on a node for MEMORY-persisted RDD data grows, making the job fail if no disk fallback is specified (as in some MLlib algorithms);
* and so on.
(thanks to [~mridulm80]'s [comments|https://github.com/apache/spark/pull/900#issuecomment-45780405])

A simple workaround is to sleep for a few seconds in the application before submitting work, so that executors have enough time to register.
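
As a rough illustration (not part of the issue itself), the workaround amounts to something like the sketch below; the 5-second delay and the input path are arbitrary placeholders:

{code}
import org.apache.spark.{SparkConf, SparkContext}

object SleepWorkaround {
  def main(args: Array[String]): Unit = {
    val conf = new SparkConf().setAppName("sleep-workaround-sketch")
    val sc = new SparkContext(conf)

    // Crude workaround: give executors time to register before the first
    // stage is submitted. The delay is a guess: it wastes time if executors
    // register quickly and is still too short if they register slowly.
    Thread.sleep(5000)

    val counts = sc.textFile("hdfs:///path/to/input")   // placeholder path
      .flatMap(_.split(" "))
      .map(word => (word, 1))
      .reduceByKey(_ + _)
      .count()
    println(s"distinct words: $counts")

    sc.stop()
  }
}
{code}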

A better way is to make the DAGScheduler submit stages only after enough executors have registered, controlled by configuration properties:

\# submit stages only after the ratio of successfully registered executors has reached this value;
\# default 0 in Standalone mode and 0.9 in YARN mode
spark.scheduler.minRegisteredRatio = 0.8

\# even if the ratio has not been reached, submit stages once this waiting time
\# (in milliseconds) has elapsed; default 10000
spark.scheduler.maxRegisteredWaitingTime = 5000
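
To make the intended behaviour concrete, here is a sketch (an assumption, not the actual patch: the class name, wiring, and expected-executor count are illustrative) of how a scheduler backend could combine the two properties into a readiness check that the DAGScheduler consults before submitting stages:

{code}
import org.apache.spark.SparkConf

// Illustrative sketch only; the real change would live in the scheduler backend.
class RegistrationGate(conf: SparkConf, totalExpectedExecutors: Int) {
  // Properties proposed in this issue, with the proposed defaults.
  private val minRegisteredRatio =
    conf.get("spark.scheduler.minRegisteredRatio", "0").toDouble
  private val maxRegisteredWaitingTime =
    conf.get("spark.scheduler.maxRegisteredWaitingTime", "10000").toLong
  private val createTime = System.currentTimeMillis()

  // Incremented by the backend each time an executor registers.
  @volatile var registeredExecutors: Int = 0

  // Stages are submitted only once this returns true: either enough executors
  // have registered, or we have already waited long enough.
  def isReady(): Boolean = {
    val enoughRegistered =
      registeredExecutors >= totalExpectedExecutors * minRegisteredRatio
    val waitedLongEnough =
      System.currentTimeMillis() - createTime >= maxRegisteredWaitingTime
    enoughRegistered || waitedLongEnough
  }
}
{code}

The properties themselves would be set like any other Spark configuration, e.g. via SparkConf.set(...) or spark-defaults.conf.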

  was:
Because creating the TaskSetManager and registering executors are asynchronous, running a job before enough executors have registered leads to several issues:
* tasks in early stages run without their preferred locality;
* the default parallelism on YARN is based on the number of registered executors, so it ends up too low;
* the number of intermediate shuffle files per node grows, since fewer nodes share the load (this can bring a node down);
* the amount of memory consumed on a node for MEMORY-persisted RDD data grows, making the job fail if no disk fallback is specified (as in some MLlib algorithms);
* and so on.
(thanks to [~mridulm80]'s [comments|https://github.com/apache/spark/pull/900#issuecomment-45780405])

A simple workaround is to sleep for a few seconds in the application before submitting work, so that executors have enough time to register.

A better way is to make the DAGScheduler submit stages only after enough executors have registered, controlled by configuration properties:

\# submit stages only after the number of successfully registered executors has reached this value;
\# default 0
spark.executor.minRegisteredNum = 20

\# even if the minimum number has not been reached, submit stages once this waiting time
\# (in milliseconds) has elapsed; default 10000
spark.executor.maxRegisteredWaitingTime = 5000


> Submit stage after executors have been registered
> -------------------------------------------------
>
>                 Key: SPARK-1946
>                 URL: https://issues.apache.org/jira/browse/SPARK-1946
>             Project: Spark
>          Issue Type: Improvement
>          Components: Spark Core
>    Affects Versions: 1.0.0
>            Reporter: Zhihui
>         Attachments: Spark Task Scheduler Optimization Proposal.pptx
>
>



--
This message was sent by Atlassian JIRA
(v6.2#6252)
