Suppose a Spark job has two stages that are independent of each other (neither
depends on the other) and both are submitted concurrently, as TaskSets, by the
DAGScheduler to the TaskScheduler. Can someone give more detailed insight into
how the cores available on the executors are distributed between the two ready
stages/TaskSets (a minimal example of such a job is included right after the
two options below)? More precisely:

-       Are tasks from the second TaskSet/stage only launched after all tasks
of the first TaskSet/stage have completed? Or,

-       Can tasks from both TaskSets be launched (granted cores) at the same
time, depending on the logic implemented by the TaskScheduler, e.g.
FIFO/Fair (a configuration snippet follows the example below)?
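
For concreteness, here is a minimal, self-contained sketch of the kind of job
I mean (the app name, core count and data are made up): joining two shuffled
RDDs produces two shuffle-map stages that do not depend on each other, so, as
far as I understand, the DAGScheduler can submit both TaskSets at once.

import org.apache.spark.{SparkConf, SparkContext}

object TwoIndependentStages {
  def main(args: Array[String]): Unit = {
    val conf = new SparkConf()
      .setAppName("two-independent-stages")
      .setMaster("local[4]")   // 4 cores to be shared between the two map stages
    val sc = new SparkContext(conf)

    // Each lineage ends at the shuffle required by the join, so each becomes
    // its own shuffle-map stage; the two map stages are independent, and only
    // the final result stage depends on both of them.
    val left  = sc.parallelize(1 to 100000).map(i => (i % 10, i))
    val right = sc.parallelize(1 to 100000).map(i => (i % 10, i * 2))

    println(left.join(right).count())

    sc.stop()
  }
}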

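This is how I switch between the two scheduling modes mentioned above (the
pool name and the allocation file path are just placeholders from my setup;
spark.scheduler.mode, spark.scheduler.allocation.file and the
spark.scheduler.pool local property are the documented knobs):

// Application-wide scheduling mode: FIFO (the default) or FAIR.
conf.set("spark.scheduler.mode", "FAIR")
// Optional pool definitions (weight, minShare) used by the FAIR scheduler.
conf.set("spark.scheduler.allocation.file", "/path/to/fairscheduler.xml")

// Jobs submitted from this thread are placed into the named pool.
sc.setLocalProperty("spark.scheduler.pool", "poolA")
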
In general, suppose a new resource offer triggers the TaskScheduler to select
some ready tasks (out of n ready TaskSets) for execution: what logic does the
TaskScheduler implement in that case?
Thanks.
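
P.S. To make the last question concrete, here is a toy, self-contained sketch
(not Spark's actual TaskScheduler code, just my current mental model) of
FIFO-style offer handling: ready TaskSets are ordered by job id and then stage
id, and each free core in an offer goes to the first TaskSet that still has
pending tasks.

object OfferSketch {
  // Toy stand-ins for a TaskSet and a resource offer (names are made up).
  case class TaskSet(jobId: Int, stageId: Int, var pendingTasks: Int)
  case class Offer(executorId: String, freeCores: Int)

  // FIFO ordering: earlier job first, then earlier stage.
  def fifoOrder(taskSets: Seq[TaskSet]): Seq[TaskSet] =
    taskSets.sortBy(ts => (ts.jobId, ts.stageId))

  // Give each free core in the offers to the highest-priority TaskSet that
  // still has pending tasks; returns the launched (executorId, stageId) pairs.
  def schedule(taskSets: Seq[TaskSet], offers: Seq[Offer]): List[(String, Int)] = {
    val launched = scala.collection.mutable.ListBuffer.empty[(String, Int)]
    val ordered = fifoOrder(taskSets)
    for (offer <- offers; _ <- 1 to offer.freeCores)
      ordered.find(_.pendingTasks > 0).foreach { ts =>
        ts.pendingTasks -= 1
        launched += ((offer.executorId, ts.stageId))
      }
    launched.toList
  }

  def main(args: Array[String]): Unit = {
    val taskSets = List(TaskSet(jobId = 0, stageId = 0, pendingTasks = 3),
                        TaskSet(jobId = 0, stageId = 1, pendingTasks = 3))
    val offers = List(Offer("exec-1", freeCores = 4))
    // Under this FIFO model, stage 0 fills three cores first and the leftover
    // core goes to stage 1:
    // List((exec-1,0), (exec-1,0), (exec-1,0), (exec-1,1))
    println(schedule(taskSets, offers))
  }
}

Is this roughly what happens under FIFO, with FAIR instead ordering the pools
by their weights/minShare?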



