[ https://issues.apache.org/jira/browse/SPARK-31437?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]
Thomas Graves updated SPARK-31437:
----------------------------------
    Affects Version/s:     (was: 3.0.0)
                           3.1.0

> Try assigning tasks to existing executors by which required resources in
> ResourceProfile are satisfied
> ------------------------------------------------------------------------
>
>                 Key: SPARK-31437
>                 URL: https://issues.apache.org/jira/browse/SPARK-31437
>             Project: Spark
>          Issue Type: Improvement
>          Components: Scheduler
>    Affects Versions: 3.1.0
>            Reporter: Hongze Zhang
>            Priority: Major
>
> With the change in [PR|https://github.com/apache/spark/pull/27773] of
> SPARK-29154, submitted tasks are scheduled onto executors only if their
> resource profile IDs strictly match. As a result, Spark always starts new
> executors for customized ResourceProfiles.
>
> This limitation is unfriendly for process-local jobs. For example, if task
> cores have been increased from 1 to 4 in a new stage and an existing
> executor has 8 slots, it is expected that 2 of the new tasks could run on
> that executor, but Spark instead starts new executors for the new
> ResourceProfile. This behavior is unnecessary.

--
This message was sent by Atlassian Jira
(v8.3.4#803005)
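The slot arithmetic behind the example above can be sketched as follows. This is an illustrative helper only, not Spark's actual scheduler code; the function name and the cores-only resource model are assumptions made for the sketch (a real ResourceProfile can also request memory and custom resources such as GPUs):

```python
def task_slots(executor_cores: int, task_cores: int) -> int:
    """Number of tasks an executor could host concurrently,
    counting CPU cores only (a simplification of ResourceProfile matching)."""
    if task_cores <= 0 or executor_cores < task_cores:
        return 0
    return executor_cores // task_cores

# An existing 8-core executor:
# - the default profile (1 core per task) yields 8 slots;
# - a new profile asking for 4 cores per task would still fit 2 tasks,
#   so reusing the executor is possible in principle, yet Spark currently
#   starts fresh executors because the profile IDs do not match.
print(task_slots(8, 1))  # 8
print(task_slots(8, 4))  # 2
```

Under this simplified model, any executor with `task_slots(...) > 0` for the new profile's per-task requirements could in principle serve the new stage without spinning up additional executors.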