GitHub user sameeragarwal commented on a diff in the pull request:

    https://github.com/apache/spark/pull/20434#discussion_r164676771
  
    --- Diff: sql/catalyst/src/main/scala/org/apache/spark/sql/internal/SQLConf.scala ---
    @@ -660,12 +660,10 @@ object SQLConf {
       val WHOLESTAGE_HUGE_METHOD_LIMIT = buildConf("spark.sql.codegen.hugeMethodLimit")
         .internal()
         .doc("The maximum bytecode size of a single compiled Java function generated by whole-stage " +
    -      "codegen. When the compiled function exceeds this threshold, " +
    -      "the whole-stage codegen is deactivated for this subtree of the current query plan. " +
    -      s"The default value is ${CodeGenerator.DEFAULT_JVM_HUGE_METHOD_LIMIT} and " +
    -      "this is a limit in the OpenJDK JVM implementation.")
    --- End diff --
    
    nit: we might still want to keep the last line around to indicate where the 64k limit comes from
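
    A possible shape for the doc string if that last sentence is retained, as the comment suggests — the exact wording here is only a sketch, not the wording ultimately merged:

    ```scala
    // Hypothetical revision that keeps the provenance of the 64k limit:
    val WHOLESTAGE_HUGE_METHOD_LIMIT = buildConf("spark.sql.codegen.hugeMethodLimit")
      .internal()
      .doc("The maximum bytecode size of a single compiled Java function generated by whole-stage " +
        "codegen. When the compiled function exceeds this threshold, whole-stage codegen is " +
        "deactivated for this subtree of the current query plan. " +
        s"The default value is ${CodeGenerator.DEFAULT_JVM_HUGE_METHOD_LIMIT}, " +
        "which is a limit in the OpenJDK JVM implementation.")
    ```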

