Github user gaborgsomogyi commented on a diff in the pull request:
https://github.com/apache/spark/pull/19893#discussion_r156385068
--- Diff: core/src/test/scala/org/apache/spark/SparkFunSuite.scala ---
@@ -34,12 +36,53 @@ abstract class SparkFunSuite
with Logging {
// scalastyle:on
+ val threadWhiteList = Set(
+ /**
+ * Netty-related threads.
+ */
+ "netty.*",
+
+ /**
+ * A single-threaded singleton EventExecutor inside netty which creates such threads.
+ */
+ "globalEventExecutor.*",
+
+ /**
+ * Netty creates such threads.
+ * Checks if a thread is alive periodically and runs a task when a thread dies.
+ */
+ "threadDeathWatcher.*",
+
+ /**
+ * These threads are created by Spark when the internal RPC environment is initialized and are used later.
--- End diff --
In the meantime I have analyzed the related threads and here are the findings:
/**
 * During [[SparkContext]] creation [[org.apache.spark.storage.BlockManager]]
 * creates event loops. One is wrapped inside
 * [[org.apache.spark.network.server.TransportServer]],
 * the other one is inside [[org.apache.spark.network.client.TransportClient]].
 * The thread pools behind them shut down asynchronously, triggered by
 * [[SparkContext#close]].
 * Manually checked that all of them stopped properly.
 */
"shuffle-client.*",
"shuffle-server.*"
Do you think in this situation they can be added to the whitelist?
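For illustration, here is a minimal sketch (not the PR's actual implementation) of how such a whitelist could be applied: snapshot the live thread names before and after a suite and filter the difference against the patterns. The names ThreadAuditSketch, runningThreadNames and leakedThreads are hypothetical:

import scala.collection.JavaConverters._

object ThreadAuditSketch {
  // Mirrors the whitelist entries discussed above.
  val threadWhiteList = Set(
    "netty.*",
    "globalEventExecutor.*",
    "threadDeathWatcher.*",
    "shuffle-client.*",
    "shuffle-server.*"
  )

  // Snapshot the names of all live threads in the JVM.
  def runningThreadNames(): Set[String] =
    Thread.getAllStackTraces.keySet().asScala.map(_.getName).toSet

  // Threads that appeared during the suite and match none of the
  // whitelisted patterns are reported as potential leaks.
  def leakedThreads(before: Set[String], after: Set[String]): Set[String] =
    (after -- before).filterNot(name => threadWhiteList.exists(name.matches))
}

With something like this, shuffle-client/shuffle-server threads that are still shutting down asynchronously after [[SparkContext#close]] would not be reported as leaks.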
---