Github user davies commented on the pull request:
https://github.com/apache/spark/pull/9383#issuecomment-153118810
After some benchmarking, I realized that using the hashcode as the sort prefix in TimSort causes a regression in both TimSort and Snappy compression (especially for aggregation after a join, where the order of records becomes random). I will revert that part.
Benchmark code:
```
# force everything through a single shuffle partition
sqlContext.setConf("spark.sql.shuffle.partitions", "1")
N = 1 << 25
M = 1 << 20
# add a string column so each row carries a compressible payload
df = sqlContext.range(N).selectExpr("id", "repeat(id, 2) as s")
df.show()
df2 = df.select(df.id.alias('id2'), df.s.alias('s2'))
# self-join followed by an aggregation, which triggers the sort and spill paths
j = df.join(df2, df.id == df2.id2).groupBy(df.s).max("id", "id2")
n = j.count()
```
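To make the effect concrete, here is a minimal standalone sketch (my own illustration, not part of the PR) using Python's `zlib` as a stand-in for Snappy: join output in key order keeps repeated byte patterns close together, while the effectively random order produced by a hashcode prefix scatters them. TimSort suffers for the same underlying reason, since it runs fastest on input with long pre-existing runs, which the hash prefix destroys.

```
# Illustration only (not from the PR): why hash-prefix ordering hurts
# compression. zlib stands in for Snappy/LZ4; the effect is the same in kind.
import random
import zlib

# rows shaped like the benchmark's repeat(id, 2) string column
records = [(str(i) * 2).encode() for i in range(1 << 16)]

clustered = b"".join(records)        # key order: neighboring rows share prefixes
shuffled = records[:]
random.shuffle(shuffled)             # roughly what sorting by hashcode produces
randomized = b"".join(shuffled)

print(len(zlib.compress(clustered)))   # repeated patterns stay close: smaller
print(len(zlib.compress(randomized)))  # patterns scattered: larger output
```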
Another interesting finding is that Snappy slows down the spilling by 50% of end-to-end time. LZ4 is faster than Snappy, but still 10% slower than no compression. Should we use `false` as the default value for `spark.shuffle.spill.compress`? (PS: tested on a Mac with an SSD; this may not hold on a spinning disk.)
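For reference, a hedged sketch of how one would flip these settings to reproduce the comparison. The two config keys are real Spark settings; treating `snappy` as the default codec reflects the Spark 1.5 line and is my assumption, not something stated in this thread.

```
# Sketch only: the two knobs being compared above.
from pyspark import SparkConf, SparkContext

conf = (SparkConf()
        .setAppName("spill-compression-benchmark")
        # compress data spilled during shuffles (default: true)
        .set("spark.shuffle.spill.compress", "false")
        # codec used for shuffle/spill compression: "snappy", "lz4", or "lzf"
        .set("spark.io.compression.codec", "lz4"))
sc = SparkContext(conf=conf)
```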