ivoson commented on code in PR #37268:
URL: https://github.com/apache/spark/pull/37268#discussion_r980222814


##########
docs/spark-standalone.md:
##########
@@ -467,7 +467,9 @@ worker during one single schedule iteration.
 
 # Stage Level Scheduling Overview
 
-Stage level scheduling is supported on Standalone when dynamic allocation is enabled. Currently, when the Master allocates executors for one application, it will schedule based on the order of the ResourceProfile ids for multiple ResourceProfiles. The ResourceProfile with smaller id will be scheduled firstly. Normally this won’t matter as Spark finishes one stage before starting another one, the only case this might have an affect is in a job server type scenario, so its something to keep in mind. For scheduling, we will only take executor memory and executor cores from built-in executor resources and all other custom resources from a ResourceProfile, other built-in executor resources such as offHeap and memoryOverhead won't take any effect. The base default profile will be created based on the spark configs when you submit an application. Executor memory and executor cores from the base default profile can be propagated to custom ResourceProfiles, but all other custom resources can not be propagated.
+Stage level scheduling is supported on Standalone:
+- When dynamic allocation is disabled: It allows users to specify different task resource requirements at stage level, and tasks with different task resource requirements will share executors with `DEFAULT_RESOURCE_PROFILE`.
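The quoted paragraph describes the Master serving ResourceProfiles in ascending id order, considering only executor memory, executor cores, and custom resources. A minimal pure-Python sketch of that ordering (hypothetical class and function names, not Spark's actual implementation):

```python
from dataclasses import dataclass, field

# Hypothetical sketch of the allocation order described above: profiles with
# smaller ids are scheduled first, and only executor memory, executor cores,
# and custom resources are modeled; other built-in executor resources
# (e.g. offHeap, memoryOverhead) are ignored for scheduling.

@dataclass
class Profile:
    id: int
    executor_memory_mb: int
    executor_cores: int
    custom_resources: dict = field(default_factory=dict)  # e.g. {"gpu": 1}

def master_schedule_order(profiles):
    """Return profiles in the order the Master would serve them (smaller id first)."""
    return sorted(profiles, key=lambda p: p.id)

profiles = [
    Profile(id=2, executor_memory_mb=4096, executor_cores=4, custom_resources={"gpu": 1}),
    Profile(id=0, executor_memory_mb=1024, executor_cores=1),  # base default profile
]
print([p.id for p in master_schedule_order(profiles)])  # [0, 2]
```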

Review Comment:
   Thanks for the suggestions.



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: [email protected]

For queries about this service, please contact Infrastructure at:
[email protected]

