HeartSaVioR commented on a change in pull request #25753: [SPARK-29046][SQL] Fix NPE in SQLConf.get when active SparkContext is stopping
URL: https://github.com/apache/spark/pull/25753#discussion_r324424213
 
 

 ##########
 File path: sql/core/src/test/scala/org/apache/spark/sql/internal/SQLConfSuite.scala
 ##########
 @@ -320,4 +321,22 @@ class SQLConfSuite extends QueryTest with SharedSparkSession {
     assert(e2.getMessage.contains("spark.sql.shuffle.partitions"))
   }
 
+  test("SPARK-29046: SQLConf.get shouldn't throw NPE when active SparkContext 
is stopping") {
+    // Logically, there's only one case where SQLConf.get throws an NPE: there is an active SparkContext,
+    // but the SparkContext is stopping - in particular, it has set dagScheduler to null.
+
+    val oldSparkContext = SparkContext.getActive
+    Utils.tryWithSafeFinally {
+      // This is necessary so the new SparkContext can become active: clear the currently active SparkContext first.
+      oldSparkContext.foreach(_ => SparkContext.clearActiveContext())
+
+      val conf = new SparkConf().setAppName("test").setMaster("local")
+      LocalSparkContext.withSpark(new SparkContext(conf)) { sc =>
+        sc.dagScheduler = null
+        SQLConf.get
+      }
+    } {
+      oldSparkContext.orElse(Some(null)).foreach(SparkContext.setActiveContext)
 
 Review comment:
   Thanks for finding this. I hadn't noticed that tests run concurrently, since many tests rely on a single shared object such as SparkContext. I'll just drop the test code in a new PR, as I don't see another way to test this.
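   To illustrate what I mean by tests sharing a single object: the active SparkContext is effectively one process-wide slot, so saving/clearing/restoring it in this test can interleave with another suite that reads the slot at the same time. Below is a minimal standalone sketch of that interleaving - it has no Spark dependency, and `activeContext` plus the suite names are made up purely for illustration, not Spark's actual implementation:

```scala
import java.util.concurrent.atomic.AtomicReference

// Hypothetical stand-in for the process-wide "active SparkContext" slot.
object ActiveContextRaceSketch {
  val activeContext = new AtomicReference[String](null)

  def main(args: Array[String]): Unit = {
    activeContext.set("suiteA-context")          // suite A is using its context

    // Suite B (this test) saves, clears, and swaps in its own context...
    val saved = Option(activeContext.get())
    activeContext.set(null)                      // clearActiveContext()
    activeContext.set("suiteB-context")          // new SparkContext becomes active

    // ...so any code in suite A that consults the slot now sees suite B's context.
    println(s"suite A observes: ${activeContext.get()}")   // suiteB-context

    activeContext.set(saved.orNull)              // restore on the way out
    println(s"after restore: ${activeContext.get()}")      // suiteA-context
  }
}
```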
   
   Btw, do we run the PR build and the master build with different options? I'm curious about the reason, since we tend to treat a passing PR build as "OK".
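
   For context on the fix this test was meant to exercise, here is only a sketch of the null-safe lookup pattern, with hypothetical stand-in types (`FakeContext`, `DagScheduler`, `EventThread`), not the actual SQLConf code: wrapping a field that SparkContext.stop() may null out in Option(...) means a concurrent reader gets None instead of a NullPointerException.

```scala
// Minimal sketch: guard a field that may be nulled concurrently with Option(...).
object NullSafeGuardSketch {
  final class EventThread { val id: Long = Thread.currentThread().getId }
  final class DagScheduler { val eventThread: EventThread = new EventThread }
  final class FakeContext { @volatile var dagScheduler: DagScheduler = new DagScheduler }

  // Unsafe: throws NullPointerException once dagScheduler has been nulled.
  def isSchedulerThreadUnsafe(sc: FakeContext): Boolean =
    sc.dagScheduler.eventThread.id == Thread.currentThread().getId

  // Safe: a nulled field becomes None and the check simply returns false.
  def isSchedulerThreadSafe(sc: FakeContext): Boolean =
    Option(sc.dagScheduler).map(_.eventThread.id).contains(Thread.currentThread().getId)

  def main(args: Array[String]): Unit = {
    val sc = new FakeContext
    sc.dagScheduler = null              // simulate the context being stopped mid-check
    println(isSchedulerThreadSafe(sc))  // false, no NPE
    // isSchedulerThreadUnsafe(sc)      // would throw NullPointerException here
  }
}
```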
