Github user vanzin commented on a diff in the pull request:

    https://github.com/apache/spark/pull/19760#discussion_r151538808
  
    --- Diff: core/src/main/scala/org/apache/spark/SparkConf.scala ---
    @@ -663,8 +663,10 @@ private[spark] object SparkConf extends Logging {
           AlternateConfig("spark.yarn.jar", "2.0")),
         "spark.yarn.access.hadoopFileSystems" -> Seq(
           AlternateConfig("spark.yarn.access.namenodes", "2.2")),
    -    "spark.maxRemoteBlockSizeFetchToMem" -> Seq(
    -      AlternateConfig("spark.reducer.maxReqSizeShuffleToMem", "2.3"))
    +    MAX_REMOTE_BLOCK_SIZE_FETCH_TO_MEM.key -> Seq(
    +      AlternateConfig("spark.reducer.maxReqSizeShuffleToMem", "2.3")),
    +    LISTENER_BUS_EVENT_QUEUE_CAPACITY.key -> Seq(
    +      AlternateConfig("spark.scheduler.listenerbus.eventqueue.size", 
"2.3"))
    --- End diff ---
    
    Yes. But the SQL module has no other way to deprecate configs. Deprecation 
warnings are handled internally by SparkConf, and the metadata needs to be 
available at the time the `SparkConf.set` call is made, which cannot happen if 
the config constant is declared in some other module (since the class holding 
the constant may not have been initialized yet).
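
    For reference, a minimal sketch of the mechanism described above, in the 
spirit of the `configsWithAlternatives` map in the diff. The object, field, and 
warning text below are illustrative, not the actual `SparkConf` internals:

    object DeprecationSketch {
      // A deprecated name for a config, plus the version in which it was
      // superseded -- mirrors the AlternateConfig entries in the diff above.
      case class AlternateConfig(key: String, version: String)

      // Map from the current key to its deprecated alternates. This metadata
      // must live in the same module as set(), so it is initialized before
      // any set() call -- the point made in the comment above.
      private val configsWithAlternatives: Map[String, Seq[AlternateConfig]] = Map(
        "spark.scheduler.listenerbus.eventqueue.capacity" -> Seq(
          AlternateConfig("spark.scheduler.listenerbus.eventqueue.size", "2.3")))

      // Reverse index: deprecated key -> (current key, alternate metadata).
      private val deprecatedToCurrent: Map[String, (String, AlternateConfig)] =
        configsWithAlternatives.flatMap { case (current, alts) =>
          alts.map(alt => alt.key -> (current, alt))
        }

      private val settings = scala.collection.mutable.Map.empty[String, String]

      def set(key: String, value: String): Unit =
        deprecatedToCurrent.get(key) match {
          case Some((current, alt)) =>
            // Warn and store under the current name so that reads of the
            // new key see the value.
            Console.err.println(
              s"Warning: '$key' is deprecated as of Spark ${alt.version}; " +
              s"use '$current' instead.")
            settings(current) = value
          case None =>
            settings(key) = value
        }

      def get(key: String): Option[String] = settings.get(key)
    }

    With this in place, `set("spark.scheduler.listenerbus.eventqueue.size", 
"20000")` logs the deprecation warning and stores the value under the new key, 
so code reading the current key still sees it.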


---
