Github user jerryshao commented on a diff in the pull request:
https://github.com/apache/spark/pull/931#discussion_r13948953
--- Diff: core/src/main/scala/org/apache/spark/rdd/OrderedRDDFunctions.scala ---
@@ -41,30 +45,92 @@ import org.apache.spark.{Logging, RangePartitioner}
  *   rdd.sortByKey()
  * }}}
  */
+
 class OrderedRDDFunctions[K : Ordering : ClassTag,
                           V: ClassTag,
                           P <: Product2[K, V] : ClassTag](
-    self: RDD[P])
-  extends Logging with Serializable {
+  self: RDD[P])
+extends Logging with Serializable {

   private val ordering = implicitly[Ordering[K]]

+  private type SortCombiner = ArrayBuffer[V]
+
   /**
-   * Sort the RDD by key, so that each partition contains a sorted range of the elements. Calling
-   * `collect` or `save` on the resulting RDD will return or output an ordered list of records
-   * (in the `save` case, they will be written to multiple `part-X` files in the filesystem, in
-   * order of the keys).
-   */
+   * Sort the RDD by key, so that each partition contains a sorted range of the elements. Calling
+   * `collect` or `save` on the resulting RDD will return or output an ordered list of records
+   * (in the `save` case, they will be written to multiple `part-X` files in the filesystem, in
+   * order of the keys).
+   */
   def sortByKey(ascending: Boolean = true, numPartitions: Int = self.partitions.size): RDD[P] = {
+    val externalSorting = SparkEnv.get.conf.getBoolean("spark.shuffle.spill", true)
     val part = new RangePartitioner(numPartitions, self, ascending)
     val shuffled = new ShuffledRDD[K, V, P](self, part)
-    shuffled.mapPartitions(iter => {
-      val buf = iter.toArray
+    if (!externalSorting) {
+      shuffled.mapPartitions(iter => {
+        val buf = iter.toArray
+        if (ascending) {
+          buf.sortWith((x, y) => ordering.lt(x._1, y._1)).iterator
+        } else {
+          buf.sortWith((x, y) => ordering.gt(x._1, y._1)).iterator
+        }
+      }, preservesPartitioning = true)
+    } else {
+      shuffled.mapPartitions(iter => {
+        val map = createExternalMap(ascending)
+        while (iter.hasNext) {
+          val kv = iter.next()
+          map.insert(kv._1, kv._2)
+        }
+        map.iterator
+      }).flatMap(elem => {
+        elem._2.iterator.map(x => (elem._1, x).asInstanceOf[P])
+      })
+    }
--- End diff --
This could be a performance hotspot, since we need to reconstruct each tuple here. With the current hash map implementation, though, there seems to be no better way to avoid it. I think we should measure the cost of this reconstruction.
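
For what it's worth, a throwaway micro-benchmark along these lines could put a rough number on the per-tuple reconstruction cost. This is only a sketch with made-up sizes and names (`TupleRebuildBench`, `grouped`), not code from this PR; it just mimics flattening `(K, ArrayBuffer[V])` pairs back into one tuple per value, as the else-branch above does:

```scala
import scala.collection.mutable.ArrayBuffer

object TupleRebuildBench {
  def main(args: Array[String]): Unit = {
    // Mimic the external map's output shape: 100k keys, 10 values each.
    val grouped: Iterator[(Int, ArrayBuffer[Int])] =
      (0 until 100000).iterator.map(k => (k, ArrayBuffer.tabulate(10)(identity)))

    val start = System.nanoTime()
    // The step under discussion: allocate one fresh tuple per value.
    var count = 0L
    grouped.flatMap { case (k, vs) => vs.iterator.map(v => (k, v)) }
      .foreach(_ => count += 1)
    val elapsedMs = (System.nanoTime() - start) / 1e6
    println(s"rebuilt $count tuples in $elapsedMs ms")
  }
}
```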