Github user cloud-fan commented on the issue:

    https://github.com/apache/spark/pull/19981

> Not sure why that would be an issue - or rather, why that's different from what's always been the case.

It's possible that people write Spark applications with the Spark SQL dependency but never use SQL functions (they just create a `SparkContext`). This can happen when someone builds a library on top of Spark and Spark SQL, but its users only call the non-SQL APIs.

> What do you think about either of my suggestions to simplify the code?

I do think we should only create the SQL tab when users actually use SQL functions, and the same should apply to the SQL listener. Previously the two were consistent: we registered the listener and set up the UI when creating `SparkSession`. But now they are inconsistent: we register the listener during `SparkContext` creation, and set up the UI after the first SQL execution starts. No offense, but I would reject this PR if I were reviewing it, because of this behavior change.
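To make the first scenario concrete, here is a minimal, hypothetical sketch (not code from the PR) of an application that has spark-sql on its classpath but only ever touches the core RDD API; the object name and job body are illustrative assumptions:

```scala
// Hypothetical sketch: spark-sql is a compile-time dependency,
// but only the core (non-SQL) API is actually used.
import org.apache.spark.{SparkConf, SparkContext}

object CoreOnlyJob {
  def main(args: Array[String]): Unit = {
    val conf = new SparkConf().setAppName("core-only").setMaster("local[*]")
    // Only a SparkContext is created; no SparkSession, no SQL execution.
    // Under the behavior discussed above, the SQL listener would still be
    // registered at this point, while the SQL tab would be deferred until
    // the first SQL execution starts.
    val sc = new SparkContext(conf)
    try {
      val sum = sc.parallelize(1 to 100).reduce(_ + _)
      println(s"sum = $sum")
    } finally {
      sc.stop()
    }
  }
}
```

For an application like this, eagerly registering the SQL listener does work that the user never benefits from, which is why keeping listener registration and UI setup consistent (both lazy, triggered by actual SQL use) is the behavior being argued for.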