kevin85421 opened a new pull request, #37384:
URL: https://github.com/apache/spark/pull/37384
### What changes were proposed in this pull request?

This PR categorizes an RPC failure according to the state of the tasks on the Executor at the time of the failure; a sketch of the classification follows this list.

A. Non-Task Failure: The Driver has not sent any LaunchTask message to the Executor, so no task is running on the Executor. The failure cannot be a Task Failure.
B. Non-Task Failure: The RPC failure happens after a LaunchTask message is sent but before the Driver receives a StatusUpdate message from the Executor. The task has not started running on the Executor, so this is a Non-Task Failure.
C. Task Failure: There is at least one running task on the Executor, so the RPC failure is recognized as a Task Failure.
D. Non-Task Failure: All tasks on the Executor are in a finished state (FINISHED, FAILED, KILLED, LOST), so no task is running. RPC failures in this phase are treated as Network Failures.
### Why are the changes needed?
An RPC failure has two possible causes: Task Failure and Network Failure.
(1) Task Failure: The network is fine, but a task crashes the Executor's JVM, so the RPC fails.
(2) Network Failure: The Executor works well, but the network between the Driver and the Executor is broken, so the RPC fails.

These two kinds of failure should be handled differently. First, if the failure is a Task Failure, we should increment the counter `numFailures`; if `numFailures` exceeds a threshold, Spark marks the job as failed. Second, if the failure is a Network Failure, we should not increment `numFailures` and should simply reassign the task to a new executor, so the job is not marked as failed because of a Network Failure. A sketch of this accounting appears at the end of this section.

However, Spark currently treats every RPC failure as a Task Failure. Hence, this PR aims to categorize RPC failures into the two categories above, Task Failure and Network Failure.
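
As a rough illustration of the intended accounting (assumed helper names such as `abortTaskSet` and `addPendingTask`; the real logic lives in `TaskSetManager`), only a Task Failure consumes the failure budget, while a Network Failure only reschedules the task:

```scala
// Self-contained sketch with assumed names, not the actual TaskSetManager code.
object FailureAccountingSketch {
  val maxTaskFailures = 4                 // default of spark.task.maxFailures
  val numFailures = new Array[Int](8)     // per-task failure counts

  def abortTaskSet(): Unit = println("task set aborted: the job fails")
  def addPendingTask(index: Int): Unit =
    println(s"task $index re-queued for another executor")

  def handleRpcFailure(taskIndex: Int, isTaskFailure: Boolean): Unit = {
    if (isTaskFailure) {
      numFailures(taskIndex) += 1
      if (numFailures(taskIndex) >= maxTaskFailures) {
        abortTaskSet()   // too many genuine Task Failures
        return
      }
    }
    // A Network Failure (or a Task Failure under the threshold) just makes
    // the task eligible to run on another executor.
    addPendingTask(taskIndex)
  }
}
```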
### Does this PR introduce _any_ user-facing change?
No
### How was this patch tested?
```
build/sbt "core/testOnly *TaskSchedulerImplSuite -- -z SPARK-39955"
build/sbt "core/testOnly *TaskSetManagerSuite"
```