GitHub user gaborgsomogyi commented on a diff in the pull request:

    https://github.com/apache/spark/pull/19893#discussion_r155600929
  
    --- Diff: core/src/test/scala/org/apache/spark/SparkFunSuite.scala ---
    @@ -34,12 +36,53 @@ abstract class SparkFunSuite
       with Logging {
     // scalastyle:on
     
    +  val threadWhiteList = Set(
    +    /**
    +     * Netty related threads.
    +     */
    +    "netty.*",
    +
    +    /**
    +     * A single-thread singleton EventExecutor inside Netty which creates such threads.
    +     */
    +    "globalEventExecutor.*",
    +
    +    /**
    +     * Netty creates such threads.
    +     * Checks if a thread is alive periodically and runs a task when a thread dies.
    +     */
    +    "threadDeathWatcher.*",
    +
    +    /**
    +     * These threads are created by Spark when the internal RPC environment is initialized, and are used later.
    --- End diff ---
    
    These threads are started when the SparkContext is created and remain there afterwards. They can be destroyed when sc.stop() is called, but I've rarely seen that pattern used. If we wanted to enforce it, a huge number of tests would have to be modified, and I don't see the ROI. On the other hand, not whitelisting them would trigger false positives when a context is reused, as you mentioned in the Hive case. Ideas/opinions? Shall we drop them?


