[ https://issues.apache.org/jira/browse/SPARK-40303?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17599616#comment-17599616 ]

Kris Mok commented on SPARK-40303:
----------------------------------

Nice findings [~LuciferYang]!

{quote}
After some experiments, I found that when the number of parameters exceeds 50, 
the performance of the case in the Jira description deteriorates significantly.
{quote}
Sounds reasonable. Note that in a standard compilation, the only things that need to 
be live at the method entry are the method parameters (both implicit ones like 
{{this}}, and explicit ones); however, for an OSR compilation, it would be all 
of the parameters/local variables that are live at the loop entry point, so in 
this case both the {{doConsume}} parameters and the local variables contribute 
to the problem.
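A minimal sketch of the difference (method and variable names are purely illustrative, not from Spark's generated code): at a standard compilation's entry point only the parameters are live, but an OSR compilation enters at the loop header, where every local variable that crosses the loop is live as well.

{code:scala}
// Hypothetical sketch: in a standard compilation of f, only the parameters
// a, b, c (plus the implicit receiver, if any) are live at method entry.
// An OSR compilation triggered by the hot loop instead enters at the loop
// header, where a, b, c AND the locals acc, i are all live.
def f(a: Long, b: Long, c: Long): Long = {
  var acc = 0L
  var i = 0
  while (i < 1000000) { // OSR entry point: a, b, c, acc, i are all live here
    acc += a + b + c + i
    i += 1
  }
  acc
}
{code}

With many {{doConsume}} parameters plus many loop-crossing locals, the set of live values at the OSR entry grows correspondingly, which matches the observed threshold effect.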

Just FYI, I have an old write-up on PrintCompilation and OSR here: 
https://gist.github.com/rednaxelafx/1165804#file-notes-md
(Gee, just realized that was from 11 years ago.......)

{quote}
maybe making the input parameters of the `doConsume` method fixed-length would 
help, such as using a List or Array
{quote}
Welp, hoisting the parameters into an Arguments object is a rather common 
technique for "code splitting" in code generators. Since we're already doing 
codegen, it's possible to generate tailor-made Arguments classes that retain 
the type information. Using a List/Array would require extra boxing for 
primitive types, so it's less ideal.
(An array-based box is already used in Spark SQL's codegen in the form of the 
{{references}} array. Indeed the type info is lost on the interface level and 
you'd have to do a cast when you get data out of it. It's still usable though.)
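To make the trade-off concrete, here's a hypothetical sketch contrasting the two styles (class and method names are made up for illustration; this is not Spark's actual generated code):

{code:scala}
// Tailor-made Arguments class: fields keep their primitive types,
// so no boxing and no casts at the use site.
final class DoConsumeArgs(val id0: Long, val id1: Long, val isNull0: Boolean)

def doConsumeSplit(args: DoConsumeArgs): Long =
  if (args.isNull0) args.id1 else args.id0 + args.id1

// references-array style (like Spark SQL codegen's `references`): type info
// is lost at the interface, so primitives are boxed and casts are needed.
def doConsumeRefs(references: Array[AnyRef]): Long = {
  val id0 = references(0).asInstanceOf[java.lang.Long].longValue
  val id1 = references(1).asInstanceOf[java.lang.Long].longValue
  id0 + id1
}
{code}

The generated-Arguments approach trades a little extra codegen complexity for keeping everything on the primitive fast path.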

> The performance will be worse after codegen
> -------------------------------------------
>
>                 Key: SPARK-40303
>                 URL: https://issues.apache.org/jira/browse/SPARK-40303
>             Project: Spark
>          Issue Type: Bug
>          Components: SQL
>    Affects Versions: 3.4.0
>            Reporter: Yuming Wang
>            Priority: Major
>
> {code:scala}
> import org.apache.spark.benchmark.Benchmark
> val dir = "/tmp/spark/benchmark"
> val N = 2000000
> val columns = Range(0, 100).map(i => s"id % $i AS id$i")
> spark.range(N).selectExpr(columns: _*).write.mode("Overwrite").parquet(dir)
> // Seq(1, 2, 5, 10, 15, 25, 40, 60, 100)
> Seq(60).foreach{ cnt =>
>   val selectExps = columns.take(cnt).map(_.split(" ").last).map(c => s"count(distinct $c)")
>   val benchmark = new Benchmark("Benchmark count distinct", N, minNumIters = 1)
>   benchmark.addCase(s"$cnt count distinct with codegen") { _ =>
>     withSQLConf(
>       "spark.sql.codegen.wholeStage" -> "true",
>       "spark.sql.codegen.factoryMode" -> "FALLBACK") {
>       spark.read.parquet(dir).selectExpr(selectExps: _*).write.format("noop").mode("Overwrite").save()
>     }
>   }
>   benchmark.addCase(s"$cnt count distinct without codegen") { _ =>
>     withSQLConf(
>       "spark.sql.codegen.wholeStage" -> "false",
>       "spark.sql.codegen.factoryMode" -> "NO_CODEGEN") {
>       spark.read.parquet(dir).selectExpr(selectExps: _*).write.format("noop").mode("Overwrite").save()
>     }
>   }
>   benchmark.run()
> }
> {code}
> {noformat}
> Java HotSpot(TM) 64-Bit Server VM 1.8.0_281-b09 on Mac OS X 10.15.7
> Intel(R) Core(TM) i9-9980HK CPU @ 2.40GHz
> Benchmark count distinct:                 Best Time(ms)   Avg Time(ms)   Stdev(ms)    Rate(M/s)   Per Row(ns)   Relative
> ------------------------------------------------------------------------------------------------------------------------
> 60 count distinct with codegen                   628146         628146           0          0.0      314072.8       1.0X
> 60 count distinct without codegen                147635         147635           0          0.0       73817.5       4.3X
> {noformat}



--
This message was sent by Atlassian Jira
(v8.20.10#820010)
