[ https://issues.apache.org/jira/browse/SPARK-20184?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15976251#comment-15976251 ]

Takeshi Yamamuro commented on SPARK-20184:
------------------------------------------

When the number of aggregated columns gets large, it seems we hit a similar 
regression even in spark-shell:
{code}
./bin/spark-shell --master local[1] --conf spark.driver.memory=2g --conf spark.sql.shuffle.partitions=1 -v

// Run the given block `count` times, printing each iteration's elapsed time
// and the average, in seconds.
def timer[R](f: => R): Unit = {
  val count = 9
  val iters = (0 until count).map { i =>
    val t0 = System.nanoTime()
    f
    val t1 = System.nanoTime()
    val elapsed = t1 - t0 + 0.0
    println(s"#$i: ${elapsed / 1000000000.0}")
    elapsed
  }
  println("Elapsed time: " + ((iters.sum / count) / 1000000000.0) + "s")
}

val numCols = 80
val t = s"(SELECT id AS key1, id AS key2, ${((0 until numCols).map(i => s"id AS 
c$i")).mkString(", ")} FROM range(0, 100000, 1, 1))"
val sqlStr = s"SELECT key1, key2, ${((0 until numCols).map(i => 
s"SUM(c$i)")).mkString(", ")} FROM $t GROUP BY key1, key2 LIMIT 100"

// Elapsed time: 2.3084404905555553s
sql("SET spark.sql.codegen.wholeStage=true")
timer { sql(sqlStr).collect }

// Elapsed time: 0.527486733s
sql("SET spark.sql.codegen.wholeStage=false")
timer { sql(sqlStr).collect }
{code}
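
To check how large the generated method actually gets for this query, the whole-stage-generated Java source can be dumped from the shell. A minimal sketch using the debug helpers in org.apache.spark.sql.execution.debug (available in Spark 2.x); the 8000-bytecode figure is HotSpot's default -XX:HugeMethodLimit:
{code}
// Sketch: print the Java source generated for each WholeStageCodegen subtree
// of the query above, so its size can be compared against HotSpot's
// huge-method threshold (8000 bytecodes by default).
import org.apache.spark.sql.execution.debug._

sql("SET spark.sql.codegen.wholeStage=true")
sql(sqlStr).debugCodegen()
{code}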

> performance regression for complex/long sql when enable whole stage codegen
> ---------------------------------------------------------------------------
>
>                 Key: SPARK-20184
>                 URL: https://issues.apache.org/jira/browse/SPARK-20184
>             Project: Spark
>          Issue Type: Improvement
>          Components: SQL
>    Affects Versions: 1.6.0, 2.1.0
>            Reporter: Fei Wang
>
> The performance of the following SQL is much worse in Spark 2.x with whole-stage 
> codegen on than with codegen off.
>     SELECT
>        sum(COUNTER_57) 
>         ,sum(COUNTER_71) 
>         ,sum(COUNTER_3)  
>         ,sum(COUNTER_70) 
>         ,sum(COUNTER_66) 
>         ,sum(COUNTER_75) 
>         ,sum(COUNTER_69) 
>         ,sum(COUNTER_55) 
>         ,sum(COUNTER_63) 
>         ,sum(COUNTER_68) 
>         ,sum(COUNTER_56) 
>         ,sum(COUNTER_37) 
>         ,sum(COUNTER_51) 
>         ,sum(COUNTER_42) 
>         ,sum(COUNTER_43) 
>         ,sum(COUNTER_1)  
>         ,sum(COUNTER_76) 
>         ,sum(COUNTER_54) 
>         ,sum(COUNTER_44) 
>         ,sum(COUNTER_46) 
>         ,DIM_1 
>         ,DIM_2 
>         ,DIM_3
>     FROM aggtable group by DIM_1, DIM_2, DIM_3 limit 100;
> The number of rows in aggtable is about 35,000,000.
> whole stage codegen on (spark.sql.codegen.wholeStage = true):    40s
> whole stage codegen off (spark.sql.codegen.wholeStage = false):    6s
> After some analysis I think this is related to the huge Java method (a Java 
> method of thousands of lines) generated by codegen.
> If I set -XX:-DontCompileHugeMethods the performance gets much 
> better (about 7s).
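
For the -XX:-DontCompileHugeMethods workaround mentioned in the description above, the flag can be passed through the standard Spark JVM options. A minimal sketch (the option names are stock spark-submit/HotSpot flags; whether it helps beyond the ~7s figure the reporter observed is not verified here):
{code}
# Sketch: disable HotSpot's huge-method compile limit on both the driver and
# the executors, so the large generated aggregate method can still be JIT-compiled.
# Note this heuristic applies to all methods, not only generated ones.
./bin/spark-shell \
  --driver-java-options "-XX:-DontCompileHugeMethods" \
  --conf "spark.executor.extraJavaOptions=-XX:-DontCompileHugeMethods"
{code}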



