Github user shivaram commented on the issue:
https://github.com/apache/spark/pull/20414
@jiangxb1987 @mridulm Could we have a special case of using the sort-based
approach when the RDD type is comparable? I think that should cover a bunch of
the common cases, and the hash version would only be used when keys are not
comparable.
Also @mridulm, your point that operations other than repartition are
affected is definitely true (just in this file, `randomSampleWithRange` I think
is affected). I think the only way to solve this in general is to enforce a
deterministic ordering when constructing ShuffleRDDs?
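
The special-casing idea above could be sketched roughly as follows. This is a hypothetical illustration, not Spark's actual ShuffledRDD code: `orderDeterministically`, `bySort`, and `byHash` are made-up names, and the point is only the dispatch between a sort (when an `Ordering` exists) and a hash-based fallback:

```scala
// Hypothetical sketch: impose a deterministic per-partition order before a
// repartition-style shuffle. Prefer a true sort when the element type is
// comparable; otherwise fall back to ordering by hashCode.
object DeterministicOrder {
  // Comparable case: a real sort gives a fully deterministic order.
  def bySort[T](items: Seq[T])(implicit ord: Ordering[T]): Seq[T] =
    items.sorted

  // Non-comparable fallback: order by hashCode. Note this is only
  // deterministic up to hash collisions, which is why the sort-based
  // path is preferable whenever an Ordering is available.
  def byHash[T](items: Seq[T]): Seq[T] =
    items.sortBy(_.hashCode())
}
```

Usage: `DeterministicOrder.bySort(Seq(3, 1, 2))` yields `Seq(1, 2, 3)` regardless of the input order, so two task re-executions would feed identical sequences into the shuffle.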