wangyum commented on pull request #35806:
URL: https://github.com/apache/spark/pull/35806#issuecomment-1067911655
```scala
import org.apache.spark.benchmark.Benchmark
val numRows = 1024 * 1024 * 50
spark.sql(s"CREATE TABLE t1 using parquet AS SELECT id AS a, id % ${numRows
/ 10000} AS b, id % ${numRows / 10000} AS c, id AS d FROM range(1, ${numRows}L,
1, 10)")
val benchmark = new Benchmark("Benchmark WholeStageCodegenExec", numRows,
minNumIters = 2)
Seq(0, 10000).foreach { threshold =>
benchmark.addCase(s"SELECT a, c, sum(b), sum(d) FROM t1 where a > 100
group by a, c and partialAggThreshold=$threshold") { _ =>
withSQLConf("spark.sql.aggregate.adaptivePartialAggregationThreshold" ->
threshold.toString) {
spark.sql("SELECT a, c, sum(b), sum(d) FROM t1 where a > 100 group by
a, c").write.format("noop").mode("Overwrite").save()
}
}
}
benchmark.run()
```
```
Java HotSpot(TM) 64-Bit Server VM 1.8.0_281-b09 on Mac OS X 10.15.7
Intel(R) Core(TM) i9-9980HK CPU @ 2.40GHz
Benchmark WholeStageCodegenExec:                                                                Best Time(ms)   Avg Time(ms)   Stdev(ms)   Rate(M/s)   Per Row(ns)   Relative
------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
SELECT a, c, sum(b), sum(d) FROM t1 where a > 100 group by a, c and partialAggThreshold=0               56519          57012         697         0.9        1078.0       1.0X
SELECT a, c, sum(b), sum(d) FROM t1 where a > 100 group by a, c and partialAggThreshold=10000           41908          42369         653         1.3         799.3       1.3X
```
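Note that `withSQLConf` in the snippet above is a test-suite helper; as a rough sketch, the same comparison could be driven from a plain `SparkSession` by toggling the config directly (the table `t1`, the config key, and the threshold values come from the snippet, while the explicit set/unset handling is an assumption about how one would manage the session state):
```scala
// Sketch: flip the partial-aggregation threshold on a live SparkSession
// instead of using the withSQLConf test helper. Assumes table t1 from the
// snippet above already exists.
val key = "spark.sql.aggregate.adaptivePartialAggregationThreshold"
Seq(0, 10000).foreach { threshold =>
  spark.conf.set(key, threshold.toString)
  spark.sql("SELECT a, c, sum(b), sum(d) FROM t1 where a > 100 group by a, c")
    .write.format("noop").mode("Overwrite").save()
}
spark.conf.unset(key)  // restore the default afterwards
```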