wineternity opened a new pull request, #38702:
URL: https://github.com/apache/spark/pull/38702
### What changes were proposed in this pull request?
Ignore `SparkListenerTaskEnd` events whose reason is `Resubmitted` to avoid a memory
leak.
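The core of the change can be sketched as a simple guard on the task-end reason. This is a minimal, self-contained model: only the `Resubmitted` check reflects this PR; the surrounding type and object names are hypothetical stand-ins, not Spark's actual `AppStatusListener` code.

```scala
// Hypothetical, simplified listener types; only the Resubmitted guard
// mirrors the idea in this PR.
object TaskEndFilter {
  sealed trait TaskEndReason
  case object Success extends TaskEndReason
  case object Resubmitted extends TaskEndReason

  final case class SparkListenerTaskEnd(taskId: Long, reason: TaskEndReason)

  // A resubmitted task will run again and emit a second, terminal task-end
  // event later, so the first event should not update executor accounting.
  def shouldProcess(event: SparkListenerTaskEnd): Boolean =
    event.reason != Resubmitted
}
```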
### Why are the changes needed?
For a long-running Spark Thrift Server, `LiveExecutor` instances accumulate in
the `deadExecutors` HashMap, which slows down event queue processing.
### Does this PR introduce _any_ user-facing change?
No
### How was this patch tested?
New UT added.
Tested in a Thrift Server environment.
### How to reproduce
It can be reproduced in spark-shell, though the steps are a bit involved:
1. Start spark-shell, setting spark.dynamicAllocation.maxExecutors=2 for
convenience:
`bin/spark-shell --driver-java-options
"-agentlib:jdwp=transport=dt_socket,server=y,suspend=n,address=8006"`
2. Run a job with a shuffle:
`sc.parallelize(1 to 1000, 10).map { x => Thread.sleep(1000) ; (x % 3, x)
}.reduceByKey((a, b) => a + b).collect()`
3. After some ShuffleMapTasks have finished, kill one or two executors so that
tasks get resubmitted.
4. Check with a heap dump or a debugger.
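The accumulation seen in the heap dump can be modeled with a toy sketch. The names below are hypothetical (this is not Spark's real `AppStatusListener`); the point is that a late `Resubmitted` task-end event for an already-removed executor re-creates its entry, and no later event ever removes it, so ignoring such events prevents the growth.

```scala
import scala.collection.mutable

// Toy model of the leak: task-end handling lazily creates an executor entry
// if one is missing. A "Resubmitted" task end arriving after the executor
// was removed re-creates a dead entry that is never cleaned up.
object DeadExecutorLeak {
  final case class LiveExecutor(id: String)

  val deadExecutors = mutable.HashMap.empty[String, LiveExecutor]

  def onTaskEnd(execId: String, reason: String, ignoreResubmitted: Boolean): Unit =
    if (!(ignoreResubmitted && reason == "Resubmitted")) {
      // Without the guard (ignoreResubmitted = false), a late event for a
      // removed executor re-creates its entry here and leaks it.
      deadExecutors.getOrElseUpdate(execId, LiveExecutor(execId))
    }
}
```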
--
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
To unsubscribe, e-mail: [email protected]
For queries about this service, please contact Infrastructure at:
[email protected]