ivoson opened a new pull request, #36716: URL: https://github.com/apache/spark/pull/36716
### What changes were proposed in this pull request?
Add support for stage-level scheduling on standalone clusters. The feature is enabled when dynamic allocation is enabled on a standalone cluster.
- `ResourceProfile` supports creating a default resource profile for standalone clusters, allowing a `None` value for executor cores, which takes all available cores from a worker.
- Enable the feature for standalone clusters in `ResourceProfileManager`.
- `ApplicationDescription` takes the default resource profile as the resource requirement description for an app.
- Change the standalone client/master interface `RequestExecutors` to support requesting executors with multiple resource profiles.
- `ApplicationInfo` maintains the resource profiles and executor information, and helps make scheduling decisions.
- `Master` schedules executors for an app based on resource profiles in FIFO order: resource profiles with smaller IDs are scheduled first.
- Add a `resource profile id` argument when starting executors.

### Why are the changes needed?
Stage-level scheduling is already supported on YARN and Kubernetes; this adds support for standalone clusters.

### Does this PR introduce _any_ user-facing change?
Yes. Since this modifies the app metadata kept by the master as well as the standalone client/master interface, a new cluster cannot work with an older Spark client.

### How was this patch tested?
Unit tests and manual testing on a local standalone cluster.
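For reference, a minimal sketch of how an application would exercise stage-level scheduling once it works on standalone, using the existing `ResourceProfileBuilder` user API. The master URL, app name, resource amounts, and config values below are illustrative assumptions, not part of this PR:

```scala
import org.apache.spark.resource.{ExecutorResourceRequests, ResourceProfileBuilder, TaskResourceRequests}
import org.apache.spark.sql.SparkSession

object StageLevelSchedulingSketch {
  def main(args: Array[String]): Unit = {
    // Stage-level scheduling on standalone requires dynamic allocation (per this PR).
    val spark = SparkSession.builder()
      .appName("stage-level-scheduling-demo")             // hypothetical app name
      .master("spark://master-host:7077")                 // hypothetical standalone master
      .config("spark.dynamicAllocation.enabled", "true")
      .config("spark.shuffle.service.enabled", "true")
      .getOrCreate()
    val sc = spark.sparkContext

    // Build a non-default resource profile: larger executors and 2 CPUs per task
    // for a specific part of the job (amounts are arbitrary for illustration).
    val execReqs = new ExecutorResourceRequests().cores(4).memory("6g")
    val taskReqs = new TaskResourceRequests().cpus(2)
    val rp = new ResourceProfileBuilder().require(execReqs).require(taskReqs).build()

    // Stages computing this RDD run on executors acquired for `rp`; with this PR,
    // the standalone Master allocates executors per resource profile (smaller
    // profile IDs scheduled first).
    val result = sc.parallelize(1 to 1000, 8)
      .withResources(rp)
      .map(_ * 2)
      .sum()
    println(s"sum = $result")

    spark.stop()
  }
}
```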

