Merge pull request #200 from mateiz/hash-fix

AppendOnlyMap fixes

- Chose a more thorough bit-mixing (reshuffling) step for values returned by Object.hashCode,
to avoid the long probe chains that occurred for consecutive integer keys (e.g.
`sc.makeRDD(1 to 100000000, 100).map(t => (t, t)).reduceByKey(_ + _).count`)
- Some other small optimizations throughout (see commit comments)
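The chaining problem arises because `Integer.hashCode` is the identity, so consecutive
integer keys land in consecutive slots of an open-addressed table and probe sequences pile
up. A minimal sketch of the kind of bit-mixing finalizer described above (this is a
MurmurHash3-style fmix32, shown for illustration — not necessarily the exact code in
commit 718cc803):

```scala
object RehashSketch {
  // Hypothetical rehash step: XOR-shift and odd-constant multiplies scatter
  // the bits of a raw hashCode. Each sub-step is invertible, so distinct
  // inputs always map to distinct outputs (no extra collisions introduced).
  def rehash(h: Int): Int = {
    var x = h
    x ^= (x >>> 16)
    x *= 0x85ebca6b   // MurmurHash3 mixing constant
    x ^= (x >>> 13)
    x *= 0xc2b2ae35   // MurmurHash3 mixing constant
    x ^= (x >>> 16)
    x
  }

  def main(args: Array[String]): Unit = {
    val mask = 63  // a 64-slot table, as in a small AppendOnlyMap
    // Without mixing, keys 1..8 occupy 8 consecutive buckets;
    // after mixing, the same keys scatter across the table.
    val rawBuckets   = (1 to 8).map(k => k & mask)
    val mixedBuckets = (1 to 8).map(k => rehash(k) & mask)
    println(s"raw:   $rawBuckets")
    println(s"mixed: $mixedBuckets")
  }
}
```

Because the mix is a bijection on `Int`, it only redistributes hash codes; collisions can
still occur once values are masked down to the table size, but runs of consecutive keys no
longer produce runs of consecutive buckets.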


Project: http://git-wip-us.apache.org/repos/asf/incubator-spark/repo
Commit: http://git-wip-us.apache.org/repos/asf/incubator-spark/commit/718cc803
Tree: http://git-wip-us.apache.org/repos/asf/incubator-spark/tree/718cc803
Diff: http://git-wip-us.apache.org/repos/asf/incubator-spark/diff/718cc803

Branch: refs/heads/master
Commit: 718cc803f7e0600c9ab265022eb6027926a38010
Parents: 51aa9d6 9837a60
Author: Reynold Xin <[email protected]>
Authored: Sun Nov 24 11:02:02 2013 +0800
Committer: Reynold Xin <[email protected]>
Committed: Sun Nov 24 11:02:02 2013 +0800

----------------------------------------------------------------------
 .../org/apache/spark/util/AppendOnlyMap.scala   | 93 +++++++++++---------
 1 file changed, 50 insertions(+), 43 deletions(-)
----------------------------------------------------------------------

