Hi,
I have a question about Spark's resource scheduling order that came up while reading this code:

github.com/apache/spark/blob/master/core/src/main/scala/org/apache/spark/deploy/master/Master.scala

In schedule(), Spark launches drivers first and then starts executors.
I'm wondering why we schedule in this order. Won't resources be wasted if a
driver has been launched but there are no resources left for its executors?
Why don't we first start executors for the drivers that are already running?
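
To make the question concrete, here is a tiny self-contained sketch of the
drivers-first ordering and the waste scenario I have in mind. The names
(Worker, Driver, ExecutorReq, schedule) are mine for illustration only, not
Spark's actual API; the real logic lives in Master.schedule() and
startExecutorsOnWorkers().

import scala.util.Random

object ScheduleOrderSketch {
  case class Worker(id: Int, var freeCores: Int, var freeMemMb: Int)
  case class Driver(id: Int, cores: Int, memMb: Int)
  case class ExecutorReq(appId: Int, cores: Int, memMb: Int)

  def schedule(workers: Seq[Worker],
               waitingDrivers: Seq[Driver],
               waitingExecutors: Seq[ExecutorReq]): Unit = {
    // Step 1: place every waiting driver first, mirroring the order in
    // Master.schedule(), which shuffles the alive workers.
    for (driver <- waitingDrivers) {
      Random.shuffle(workers).find(w =>
        w.freeCores >= driver.cores && w.freeMemMb >= driver.memMb
      ).foreach { w =>
        w.freeCores -= driver.cores
        w.freeMemMb -= driver.memMb
        println(s"launched driver ${driver.id} on worker ${w.id}")
      }
    }
    // Step 2: only afterwards are executors placed. If new drivers consumed
    // the cluster, these requests just wait -- that is my question.
    for (req <- waitingExecutors) {
      workers.find(w => w.freeCores >= req.cores && w.freeMemMb >= req.memMb) match {
        case Some(w) =>
          w.freeCores -= req.cores
          w.freeMemMb -= req.memMb
          println(s"launched executor for app ${req.appId} on worker ${w.id}")
        case None =>
          println(s"no resources left for app ${req.appId}'s executor")
      }
    }
  }

  def main(args: Array[String]): Unit = {
    // One worker; one running app needs an executor, but a new driver
    // grabs the worker's resources first.
    val workers = Seq(Worker(1, 2, 2048))
    schedule(workers, Seq(Driver(100, 2, 1024)), Seq(ExecutorReq(42, 2, 1024)))
  }
}

Running this, the new driver takes the only worker's two cores, so the
already-running app's executor request finds no resources and has to wait.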

Thanks,


Linfeng
