LuciferYang commented on code in PR #47945:
URL: https://github.com/apache/spark/pull/47945#discussion_r1740282052
##########
core/src/main/scala/org/apache/spark/BarrierCoordinator.scala:
##########
@@ -51,7 +52,8 @@ private[spark] class BarrierCoordinator(
 // TODO SPARK-25030 Create a Timer() in the mainClass submitted to SparkSubmit makes it unable to
 // fetch result, we shall fix the issue.
-  private lazy val timer = new Timer("BarrierCoordinator barrier epoch increment timer")
+  private lazy val timer = ThreadUtils.newSingleThreadScheduledExecutor(
Review Comment:
```
The reason was that there is a non-daemon Timer thread named `BarrierCoordinator barrier epoch increment timer`, which prevented the driver JVM from stopping.
```
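
To make the failure mode concrete, here is a minimal standalone sketch (not part of this PR, object and thread names are illustrative only) showing how a non-daemon `java.util.Timer` worker thread can keep a JVM alive after `main` returns:

```scala
import java.util.{Timer, TimerTask}

object NonDaemonTimerDemo {
  def main(args: Array[String]): Unit = {
    // java.util.Timer(name) creates a NON-daemon worker thread by default.
    val timer = new Timer("BarrierCoordinator barrier epoch increment timer")
    timer.schedule(new TimerTask {
      override def run(): Unit = println("tick")
    }, 1000L, 1000L)
    // main returns here, but the process keeps running because the timer
    // thread is non-daemon; timer.cancel() or new Timer(name, true) would
    // let the JVM exit.
  }
}
```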
@jshmchenxi Based on the description in [SPARK-49479](https://issues.apache.org/jira/browse/SPARK-49479), do you think the root cause is that the thread named `BarrierCoordinator barrier epoch increment timer` is not a daemon thread? If so, the `newSingleThreadScheduledExecutor` used here does not seem to fix that, because neither the `ThreadFactoryBuilder` nor the scheduled tasks are configured as daemon threads. Should we consider using `newDaemonThreadPoolScheduledExecutor` instead? Also, does the master branch have the same problem? Could you provide a reproducible unit test so reviewers can verify the issue?
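
For reference, a daemon-threaded replacement could look roughly like the sketch below. It uses the plain JDK API rather than Spark's `ThreadUtils` helpers, so the exact helper name and signature in `ThreadUtils` should be confirmed against the codebase; the object and thread names here are illustrative:

```scala
import java.util.concurrent.{Executors, ScheduledExecutorService, ThreadFactory, TimeUnit}

object DaemonSchedulerSketch {
  // Build a single-thread scheduler whose worker thread is marked as daemon,
  // so it cannot keep the driver JVM alive on its own.
  def newDaemonSingleThreadScheduler(threadName: String): ScheduledExecutorService = {
    val factory = new ThreadFactory {
      override def newThread(r: Runnable): Thread = {
        val t = new Thread(r, threadName)
        t.setDaemon(true)
        t
      }
    }
    Executors.newSingleThreadScheduledExecutor(factory)
  }

  def main(args: Array[String]): Unit = {
    val scheduler = newDaemonSingleThreadScheduler("barrier-epoch-increment-timer")
    scheduler.schedule(new Runnable {
      override def run(): Unit = println("timeout check")
    }, 1L, TimeUnit.SECONDS)
    // A daemon worker thread does not block JVM exit on its own, but the
    // coordinator should still shut the scheduler down explicitly in onStop().
    scheduler.shutdownNow()
  }
}
```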