GitHub user vanzin commented on a diff in the pull request:

    https://github.com/apache/spark/pull/19893#discussion_r155602584
  
    --- Diff: core/src/test/scala/org/apache/spark/SparkFunSuite.scala ---
    @@ -34,12 +36,53 @@ abstract class SparkFunSuite
       with Logging {
     // scalastyle:on
     
    +  val threadWhiteList = Set(
    +    /**
    +     * Netty-related threads.
    +     */
    +    "netty.*",
    +
    +    /**
    +     * A single-threaded singleton EventExecutor inside Netty which creates such threads.
    +     */
    +    "globalEventExecutor.*",
    +
    +    /**
    +     * Netty creates such threads.
    +     * Periodically checks whether a thread is alive and runs a task when the thread dies.
    +     */
    +    "threadDeathWatcher.*",
    +
    +    /**
    +     * These threads are created by Spark when the internal RPC environment is initialized, and are used later.
    --- End diff --
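
    For context, a minimal sketch of how a whitelist like this could be applied: snapshot the live thread names before and after a test, then flag any new thread whose name matches none of the whitelisted patterns. The object `ThreadAuditSketch` and its helpers are hypothetical illustrations, not the code under review; only the patterns come from the diff above.

        import scala.collection.JavaConverters._

        // Hypothetical sketch: compare thread-name snapshots taken around a test
        // and report new threads that match no whitelisted pattern.
        object ThreadAuditSketch {
          // Patterns taken from the diff above.
          val threadWhiteList: Set[String] =
            Set("netty.*", "globalEventExecutor.*", "threadDeathWatcher.*")

          /** Names of all threads currently alive in this JVM. */
          def runningThreadNames(): Set[String] =
            Thread.getAllStackTraces.keySet.asScala.map(_.getName).toSet

          /** Threads that appeared during the test and are not whitelisted. */
          def leakedThreads(before: Set[String], after: Set[String]): Set[String] =
            (after -- before).filterNot(name => threadWhiteList.exists(name.matches))
        }

    A suite would record runningThreadNames() in beforeAll, call it again in afterAll, and fail (or log) when leakedThreads is non-empty.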
    
    > These can be destroyed when sc.stop() called but I've seen this pattern rarely.
    
    Not sure I understand. Tests should be stopping their contexts (SQL and Hive tests notwithstanding), and if these threads are not going away when that happens, it's a bug.
    
    After all, that's the kind of thing I expect this code to be catching.
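
    A minimal sketch of the cleanup pattern described above, assuming ScalaTest's BeforeAndAfterEach; the suite and test names are illustrative, not from the PR. Each test stops its SparkContext, so any RPC threads still alive afterwards are exactly the kind of leak this check should catch.

        import org.apache.spark.{SparkConf, SparkContext}
        import org.scalatest.{BeforeAndAfterEach, FunSuite}

        // Hypothetical suite illustrating the expected cleanup: every test stops
        // its context, so RPC threads that survive afterEach are a genuine leak.
        class ExampleCleanupSuite extends FunSuite with BeforeAndAfterEach {
          private var sc: SparkContext = _

          override def beforeEach(): Unit = {
            sc = new SparkContext(
              new SparkConf().setMaster("local[2]").setAppName("cleanup-example"))
          }

          override def afterEach(): Unit = {
            if (sc != null) {
              sc.stop() // tears down the RPC environment and its dispatcher threads
              sc = null
            }
          }

          test("context works and is stopped afterwards") {
            assert(sc.parallelize(1 to 10).count() === 10L)
          }
        }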

