Github user cloud-fan commented on a diff in the pull request:
https://github.com/apache/spark/pull/12067#discussion_r59648732
--- Diff:
sql/core/src/test/scala/org/apache/spark/sql/DatasetBenchmark.scala ---
@@ -117,30 +160,45 @@ object DatasetBenchmark {
val sparkContext = new SparkContext("local[*]", "Dataset benchmark")
val sqlContext = new SQLContext(sparkContext)
- val numRows = 10000000
+ val numRows = 100000000
val numChains = 10
val benchmark = backToBackMap(sqlContext, numRows, numChains)
val benchmark2 = backToBackFilter(sqlContext, numRows, numChains)
+ val benchmark3 = aggregate(sqlContext, numRows)
/*
Java HotSpot(TM) 64-Bit Server VM 1.8.0_60-b27 on Mac OS X 10.11.4
Intel(R) Core(TM) i7-4960HQ CPU @ 2.60GHz
back-to-back map:                   Best/Avg Time(ms)    Rate(M/s)   Per Row(ns)   Relative
-------------------------------------------------------------------------------------------
- Dataset                                  902 /  995         11.1          90.2       1.0X
- DataFrame                                132 /  167         75.5          13.2       6.8X
- RDD                                      216 /  237         46.3          21.6       4.2X
+ RDD                                     1935 / 2105         51.7          19.3       1.0X
+ DataFrame                                756 /  799        132.3           7.6       2.6X
+ Dataset                                 7359 / 7506         13.6          73.6       0.3X
--- End diff --
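For context on what the "back-to-back map" numbers above are measuring: the benchmark applies `numChains` map operations in a row over `numRows` rows. Below is a minimal plain-Scala sketch of that chaining pattern (no Spark; `chainedMap` and its increment function are hypothetical names chosen for illustration, not code from the PR):

```scala
// Hypothetical sketch of the "back-to-back map" pattern: apply
// `numChains` consecutive map operations over `numRows` values.
object BackToBackMapSketch {
  def chainedMap(numRows: Int, numChains: Int): Long = {
    var res: Seq[Long] = (0 until numRows).map(_.toLong)
    var i = 0
    while (i < numChains) {
      res = res.map(_ + 1) // each chain link is one full map pass
      i += 1
    }
    res.sum
  }

  def main(args: Array[String]): Unit = {
    // 5 rows, 3 chained maps: each of 0..4 gains +3
    println(chainedMap(5, 3)) // (0+1+2+3+4) + 5*3 = 25
  }
}
```

The benchmark compares how well each API (RDD, DataFrame, Dataset) collapses or pays for these repeated passes per row, which is why the per-row nanosecond column is the interesting one.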
Ah sorry, I updated the benchmark code before; let me run it again on master.