[
https://issues.apache.org/jira/browse/SPARK-20184?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15965597#comment-15965597
]
Fei Wang edited comment on SPARK-20184 at 4/12/17 9:21 AM:
-----------------------------------------------------------
Try this:
1. create table
{code}
val df = (1 to 500000).map(x => (x.toString, x.toString, x, x, x, x, x, x, x,
x, x, x, x, x, x, x, x, x, x, x, x, x)).toDF("dim_1", "dim_2", "c1", "c2",
"c3", "c4", "c5", "c6", "c7", "c8", "c9", "c10","c11", "c12", "c13", "c14",
"c15", "c16", "c17", "c18", "c19", "c20")
df.write.saveAsTable("sum_table_50w_3")
{code}
2. query the table
{code}
select dim_1, dim_2, sum(c1), sum(c2), sum(c3), sum(c4), sum(c5), sum(c6),
sum(c7), sum(c8), sum(c9), sum(c10), sum(c11), sum(c12), sum(c13), sum(c14),
sum(c15), sum(c16), sum(c17), sum(c18), sum(c19), sum(c20) from sum_table_50w_3
group by dim_1, dim_2 limit 100
{code}
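To compare runtimes with and without whole stage codegen, the same query can be run from the spark-sql CLI with the config toggled (a sketch; timings will vary by environment, and the sum list is abbreviated here for brevity — use the full 20-column query from step 2 to reproduce):

{code}
# whole stage codegen on (the default in Spark 2.x)
spark-sql --conf spark.sql.codegen.wholeStage=true \
  -e "select dim_1, dim_2, sum(c1), sum(c2) from sum_table_50w_3 group by dim_1, dim_2 limit 100"

# whole stage codegen off
spark-sql --conf spark.sql.codegen.wholeStage=false \
  -e "select dim_1, dim_2, sum(c1), sum(c2) from sum_table_50w_3 group by dim_1, dim_2 limit 100"
{code}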
> performance regression for complex/long sql when enable whole stage codegen
> ---------------------------------------------------------------------------
>
> Key: SPARK-20184
> URL: https://issues.apache.org/jira/browse/SPARK-20184
> Project: Spark
> Issue Type: Improvement
> Components: SQL
> Affects Versions: 1.6.0, 2.1.0
> Reporter: Fei Wang
>
> The performance of the following SQL gets much worse in Spark 2.x with whole
> stage codegen on than with it off.
> SELECT
> sum(COUNTER_57)
> ,sum(COUNTER_71)
> ,sum(COUNTER_3)
> ,sum(COUNTER_70)
> ,sum(COUNTER_66)
> ,sum(COUNTER_75)
> ,sum(COUNTER_69)
> ,sum(COUNTER_55)
> ,sum(COUNTER_63)
> ,sum(COUNTER_68)
> ,sum(COUNTER_56)
> ,sum(COUNTER_37)
> ,sum(COUNTER_51)
> ,sum(COUNTER_42)
> ,sum(COUNTER_43)
> ,sum(COUNTER_1)
> ,sum(COUNTER_76)
> ,sum(COUNTER_54)
> ,sum(COUNTER_44)
> ,sum(COUNTER_46)
> ,DIM_1
> ,DIM_2
> ,DIM_3
> FROM aggtable group by DIM_1, DIM_2, DIM_3 limit 100;
> aggtable has about 35,000,000 rows.
> whole stage codegen on (spark.sql.codegen.wholeStage = true): 40s
> whole stage codegen off (spark.sql.codegen.wholeStage = false): 6s
> After some analysis I think this is related to the huge Java method (a method
> thousands of lines long) generated by codegen.
> If I configure -XX:-DontCompileHugeMethods, the performance gets much better
> (about 7s).
--
This message was sent by Atlassian JIRA
(v6.3.15#6346)