dongjoon-hyun commented on a change in pull request #25753: [SPARK-29046][SQL] Fix NPE in SQLConf.get when active SparkContext is stopping
URL: https://github.com/apache/spark/pull/25753#discussion_r324410880
 
 

 ##########
 File path: sql/core/src/test/scala/org/apache/spark/sql/internal/SQLConfSuite.scala
 ##########
 @@ -320,4 +321,22 @@ class SQLConfSuite extends QueryTest with SharedSparkSession {
     assert(e2.getMessage.contains("spark.sql.shuffle.partitions"))
   }
 
+  test("SPARK-29046: SQLConf.get shouldn't throw NPE when active SparkContext 
is stopping") {
+    // Logically, there's only one case SQLConf.get throws NPE: there's active 
SparkContext,
+    // but SparkContext is stopping - especially it sets dagScheduler as null.
+
+    val oldSparkContext = SparkContext.getActive
+    Utils.tryWithSafeFinally {
 +      // This is necessary so the new SparkContext becomes active: it clears the currently active one.
+      oldSparkContext.foreach(_ => SparkContext.clearActiveContext())
+
+      val conf = new SparkConf().setAppName("test").setMaster("local")
+      LocalSparkContext.withSpark(new SparkContext(conf)) { sc =>
+        sc.dagScheduler = null
+        SQLConf.get
+      }
+    } {
+      oldSparkContext.orElse(Some(null)).foreach(SparkContext.setActiveContext)
 
 Review comment:
   Yes. I've been investigating this and the other commits, but this is the usual suspect given the error. Since this has been broken for over two days and it seems we cannot fix it over the weekend, I'll revert this.
