ivoson commented on code in PR #36716:
URL: https://github.com/apache/spark/pull/36716#discussion_r902806648


##########
docs/spark-standalone.md:
##########
@@ -455,6 +455,14 @@ if the worker has enough cores and memory. Otherwise, each executor grabs all th
 on the worker by default, in which case only one executor per application may be launched on each
 worker during one single schedule iteration.
 
+# Stage Level Scheduling Overview
+
+Stage level scheduling is supported on Standalone when dynamic allocation is enabled. Currently, when the Master allocates executors for one application, it schedules them based on the order of the ResourceProfile ids when multiple ResourceProfiles exist: the ResourceProfile with the smaller id is scheduled first. Normally this won't matter as Spark finishes one stage before starting another one; the only case this might have an effect is in a job server type scenario, so it's something to keep in mind. For scheduling, only executor memory and executor cores from the built-in executor resources, together with all other custom resources from a ResourceProfile, are taken into account; other built-in executor resources such as offHeap and memoryOverhead won't take any effect. The base default profile is created from the spark configs when you submit an application. Executor memory and executor cores from the base default profile can be propagated to custom ResourceProfiles, but all other custom resources cannot be propagated.

Review Comment:
   Thanks, will update the doc here.
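   
   For readers following this thread, a minimal sketch of how a custom ResourceProfile might be built and attached to an RDD on Standalone (not part of this PR; the resource name and amounts are hypothetical, and `sc` is assumed to be an existing SparkContext with dynamic allocation enabled):
   
   ```scala
   import org.apache.spark.resource.{ExecutorResourceRequests, ResourceProfileBuilder, TaskResourceRequests}
   
   // Hypothetical values for illustration only.
   val execReqs = new ExecutorResourceRequests()
     .memory("4g")        // executor memory: considered by the Standalone master
     .cores(2)            // executor cores: considered by the Standalone master
     .resource("gpu", 1)  // custom resource: also considered for scheduling
   
   val taskReqs = new TaskResourceRequests()
     .cpus(1)
     .resource("gpu", 1)
   
   val profile = new ResourceProfileBuilder()
     .require(execReqs)
     .require(taskReqs)
     .build()
   
   // Stages computing this RDD request executors matching `profile`.
   val rdd = sc.parallelize(1 to 100, numSlices = 4).withResources(profile)
   rdd.map(_ * 2).collect()
   ```
   
   As the doc text above notes, only the memory, cores, and custom resource requests in the profile would influence how the Standalone Master allocates executors; memoryOverhead or offHeap requests on a profile take no effect for Standalone scheduling.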



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: [email protected]

For queries about this service, please contact Infrastructure at:
[email protected]