[
https://issues.apache.org/jira/browse/SPARK-51666?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
]
Wenchen Fan resolved SPARK-51666.
---------------------------------
Fix Version/s: 4.0.0
Resolution: Fixed
Issue resolved by pull request 50459
[https://github.com/apache/spark/pull/50459]
> Fix sparkStageCompleted executorRunTime metric calculation
> -----------------------------------------------------------
>
> Key: SPARK-51666
> URL: https://issues.apache.org/jira/browse/SPARK-51666
> Project: Spark
> Issue Type: Bug
> Components: Spark Core
> Affects Versions: 4.1.0
> Reporter: Weichen Xu
> Assignee: Weichen Xu
> Priority: Major
> Labels: pull-request-available
> Fix For: 4.0.0
>
>
> Fix sparkStageCompleted executorRunTime metric calculation:
> When a Spark task uses multiple CPUs, the CPU-seconds metric should
> capture the total execution seconds across all CPUs. For example, if a
> stage sets CPUs-per-task to 48 and a task runs for 10 seconds on each
> CPU, the total CPU-seconds for that stage should be 10 seconds x 1 task
> x 48 CPUs = 480 CPU-seconds. If another task used only 1 CPU, its total
> is 10 seconds x 1 CPU = 10 CPU-seconds.
> *This is an important fix because Spark supports stage-level scheduling
> (tasks in different stages can be configured with different numbers of
> CPUs); without it, the metric calculation is wrong whenever stage-level
> scheduling is used.*
--
This message was sent by Atlassian Jira
(v8.20.10#820010)
---------------------------------------------------------------------
To unsubscribe, e-mail: [email protected]
For additional commands, e-mail: [email protected]