Ngone51 commented on code in PR #36716:
URL: https://github.com/apache/spark/pull/36716#discussion_r893599247
##########
core/src/main/scala/org/apache/spark/deploy/master/Master.scala:
##########
@@ -725,26 +729,38 @@ private[deploy] class Master(
*/
private def startExecutorsOnWorkers(): Unit = {
     // Right now this is a very simple FIFO scheduler. We keep trying to fit in the first app
-    // in the queue, then the second app, etc.
+    // in the queue, then the second app, etc. And for each app, we will schedule base on
+    // resource profiles also with a simple FIFO scheduler, resource profile with smaller id
+    // first.
Review Comment:
> Currently, we don't have a good way to infer the order of requests for
different resource profiles.

I actually meant the order in which the Master receives the requests, although
I know it could differ from the order in which the sender (driver) issued them,
due to the asynchronous RPC framework. On second thought, though, the requests
come from pending tasks, which can be scheduled in parallel as long as there
are enough resources. So it doesn't really matter which resource profile is
used first to launch executors; scheduling by ascending resource profile id
should be enough.
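
As a hedged illustration (not the actual `Master.scala` code; the case classes and method names here are hypothetical), the ordering being discussed can be sketched as: iterate over waiting apps FIFO, and within each app, walk its resource profile requests in ascending id order:

```scala
// Hypothetical sketch of the scheduling order discussed above: FIFO over
// apps, then ascending resource profile id within each app. The types below
// are illustrative stand-ins, not Spark's real Master-side data structures.
case class ResourceProfileRequest(rpId: Int, pendingExecutors: Int)
case class AppInfo(name: String, requests: Seq[ResourceProfileRequest])

object FifoSketch {
  // Returns the order in which (app, resource profile) pairs would be
  // considered for launching executors.
  def launchOrder(waitingApps: Seq[AppInfo]): Seq[(String, Int)] =
    for {
      app <- waitingApps                  // simple FIFO over apps
      rp  <- app.requests.sortBy(_.rpId)  // smaller resource profile id first
      if rp.pendingExecutors > 0          // skip profiles with nothing pending
    } yield (app.name, rp.rpId)

  def main(args: Array[String]): Unit = {
    val apps = Seq(
      AppInfo("app1", Seq(ResourceProfileRequest(2, 1), ResourceProfileRequest(0, 2))),
      AppInfo("app2", Seq(ResourceProfileRequest(1, 1))))
    // app1's profile 0 comes before its profile 2; app2 follows.
    println(launchOrder(apps))
  }
}
```

The point of the comment is that this fixed id order is acceptable: because pending tasks from different profiles can be scheduled in parallel once executors exist, no profile is starved by picking a deterministic order.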
--
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
To unsubscribe, e-mail: [email protected]
For queries about this service, please contact Infrastructure at:
[email protected]